doc_id
stringlengths
2
10
revision_depth
stringclasses
5 values
before_revision
stringlengths
3
309k
after_revision
stringlengths
5
309k
edit_actions
list
sents_char_pos
sequence
0909.2454
1
The evolution of genetic systems has been studied through the use of modifier gene models, in which a selectively neutral gene controls genetic transmission of other genes under selection. Analytical studies to obtain a general understanding of the dynamics have faced tradeoffs between different features for tractability. No study has been able to obtain results for arbitrary selection on multiple loci undergoing multiple genetic transformation events. Here techniques from Karlin (1982) are applied to a multilocus model of mutation with multivariate control over mutation rates, and the reduction principle is found to operate. Constraining assumptions are that mutation distributions follow the transition probabilities of reversible Markov chains, and that all loci are tightly linked. The extension of the reduction principle is shown topologically to require a manifold of mutation rate alterations that are neutral for a new modifier allele , below which will cause a new modifierallele to increase when rare, and above which cause it to go extinct. This manifold is the same structure as found in a multivariate multilocus model of recombination modification by Zhivotovsky, Feldman, and Christiansen ( 1994 ). A discussion of the near-equilibrium models that depart from the reduction principle concludes with a conjecture about the structural causes .
A model of mutation rate evolution for multiple loci under arbitrary selection is analyzed. Results are obtained using techniques from Karlin (1982) that overcome the weak selection constraints needed for tractability in prior studies of multilocus event models. A multivariate form of the reduction principle is found: reduction results at individual loci combine topologically to produce a surface of mutation rate alterations that are neutral for a new modifier allele . New mutation rates survive if and only if they fall below this surface - a generalization of the hyperplane found by Zhivotovsky et al. ( 1994 ) for a multilocus recombination modifier. Increases in mutation rates at some loci may evolve if compensated for by decreases at other loci. The strength of selection on the modifier scales in proportion to the number of germline cell divisions, and increases with the number of loci affected. Loci that do not make a difference to marginal fitnesses at equilibrium are not subject to the reduction principle, and under fine tuning of mutation rates would be expected to have higher mutation rates than loci in mutation-selection balance. Other results include the nonexistence of 'viability analogous, Hardy-Weinberg' modifier polymorphisms under multiplicative mutation, and the sufficiency of average transmission rates to encapsulate the effect of modifier polymorphisms on the transmission of loci under selection. A conjecture is offered regarding situations, like recombination in the presence of mutation, that exhibit departures from the reduction principle . Constraints for tractability are: tight linkage of all loci, initial fixation at the modifier locus, and mutation distributions comprising transition probabilities of reversible Markov chains .
[ { "type": "R", "before": "The evolution of genetic systems has been studied through the use of modifier gene models, in which a selectively neutral gene controls genetic transmission of other genes under selection. Analytical studies to obtain a general understanding of the dynamics have faced tradeoffs between different features for tractability. No study has been able to obtain results for arbitrary selection on multiple loci undergoing multiple genetic transformation events. Here", "after": "A model of mutation rate evolution for multiple loci under arbitrary selection is analyzed. Results are obtained using", "start_char_pos": 0, "end_char_pos": 461 }, { "type": "R", "before": "are applied to a multilocus model of mutation with multivariate control over mutation rates, and the reduction principle is found to operate. Constraining assumptions are that mutation distributions follow the transition probabilities of reversible Markov chains, and that all loci are tightly linked. The extension", "after": "that overcome the weak selection constraints needed for tractability in prior studies of multilocus event models. A multivariate form", "start_char_pos": 492, "end_char_pos": 807 }, { "type": "R", "before": "shown topologically to require a manifold", "after": "found: reduction results at individual loci combine topologically to produce a surface", "start_char_pos": 838, "end_char_pos": 879 }, { "type": "R", "before": ", below which will cause a new modifierallele to increase when rare, and above which cause it to go extinct. This manifold is the same structure as found in a multivariate multilocus model of recombination modification by Zhivotovsky, Feldman, and Christiansen (", "after": ". New mutation rates survive if and only if they fall below this surface - a generalization of the hyperplane found by Zhivotovsky et al. (", "start_char_pos": 952, "end_char_pos": 1214 }, { "type": "R", "before": "). A discussion of the near-equilibrium models that depart", "after": ") for a multilocus recombination modifier. Increases in mutation rates at some loci may evolve if compensated for by decreases at other loci. The strength of selection on the modifier scales in proportion to the number of germline cell divisions, and increases with the number of loci affected. Loci that do not make a difference to marginal fitnesses at equilibrium are not subject to the reduction principle, and under fine tuning of mutation rates would be expected to have higher mutation rates than loci in mutation-selection balance. Other results include the nonexistence of 'viability analogous, Hardy-Weinberg' modifier polymorphisms under multiplicative mutation, and the sufficiency of average transmission rates to encapsulate the effect of modifier polymorphisms on the transmission of loci under selection. A conjecture is offered regarding situations, like recombination in the presence of mutation, that exhibit departures", "start_char_pos": 1220, "end_char_pos": 1278 }, { "type": "R", "before": "concludes with a conjecture about the structural causes", "after": ". Constraints for tractability are: tight linkage of all loci, initial fixation at the modifier locus, and mutation distributions comprising transition probabilities of reversible Markov chains", "start_char_pos": 1308, "end_char_pos": 1363 } ]
[ 0, 188, 323, 456, 633, 793, 1060, 1222 ]
0909.2517
1
In this paper we have defined one function that have been used to construct different fractals having fractal dimensions between 1.58 and 2. Also, we tried to calculate the amount of increment of fractal dimension in accordance with the base of the number systems. And in switching of fractals from one base to another, the increment of fractal dimension is constant, which is 1.58, its quite surprising. Further, interestingly enough, these very fractals could be a frame of lyrics for the musicians, as we know the fractal dimension of music is around 1.65 and varies between a high of 1.68 and a low of 1.60. further , at the end we conjecture that the switching form one music fractal to another is nothing but enhancing a constant amount of musical notes in various orientations.
In this paper we have defined one function that has been used to construct different fractals having fractal dimensions between 1.58 and 2. Also, we tried to calculate the amount of increment of fractal dimension in accordance with the base of the number systems. Further, interestingly enough, these very fractals could be a frame of lyrics for the musicians, as we know that the fractal dimension of music is around 1.65 and varies between a high of 1.68 and a low of 1.60. Further , at the end we conjecture that the switching from one music fractal to another is nothing but enhancing a constant amount fractal dimension which might be equivalent to a kind of different sets of musical notes in various orientations.
[ { "type": "R", "before": "have", "after": "has", "start_char_pos": 48, "end_char_pos": 52 }, { "type": "D", "before": "And in switching of fractals from one base to another, the increment of fractal dimension is constant, which is 1.58, its quite surprising.", "after": null, "start_char_pos": 265, "end_char_pos": 404 }, { "type": "A", "before": null, "after": "that", "start_char_pos": 513, "end_char_pos": 513 }, { "type": "R", "before": "further", "after": "Further", "start_char_pos": 613, "end_char_pos": 620 }, { "type": "R", "before": "form", "after": "from", "start_char_pos": 667, "end_char_pos": 671 }, { "type": "A", "before": null, "after": "fractal dimension which might be equivalent to a kind of different sets", "start_char_pos": 744, "end_char_pos": 744 } ]
[ 0, 264, 404 ]
0909.3219
1
This paper studies a contingent claim pricing problem in incomplete markets , based on the risk indifference principle . The seller's dynamic risk indifference price is the payment that makes the risk involved for the seller of a contract equal, at any time, to the risk involved if the contract is not sold and no payment is received. An explicit formula for the dynamic risk indifference price is given as the solution of a one-dimensional linear BSDE with stochastic Lipschitz coefficient . The results show that any convex risk measure used for indifference pricing leads to an equivalent martingale measure. This entails a simple linear representation of the price as the expected derivative payoff under the "risk indifference measure" . From a risk management perspective, the model provides two-sided risk indifference bounds for derivative prices in incomplete markets.
This paper studies a contingent claim pricing problem in markets where the underlying traded assets are illiquid. Based on the risk indifference principle , an explicit formula for the dynamic risk indifference price is given as the difference between the solutions of two one-dimensional BSDEs with stochastic Lipschitz coefficients . The results show that dynamic risk indifference prices provide tighter price bounds than dynamic upper and lower hedging prices. Besides, a linear version of the price is given as the marginal risk price, i.e., the risk indifference price of a vanishing amount of derivatives . From a risk management perspective, the model provides two-sided risk indifference bounds for derivative prices in incomplete markets.
[ { "type": "R", "before": "incomplete markets , based", "after": "markets where the underlying traded assets are illiquid. Based", "start_char_pos": 57, "end_char_pos": 83 }, { "type": "R", "before": ". The seller's dynamic risk indifference price is the payment that makes the risk involved for the seller of a contract equal, at any time, to the risk involved if the contract is not sold and no payment is received. An", "after": ", an", "start_char_pos": 119, "end_char_pos": 338 }, { "type": "R", "before": "solution of a", "after": "difference between the solutions of two", "start_char_pos": 412, "end_char_pos": 425 }, { "type": "R", "before": "linear BSDE", "after": "BSDEs", "start_char_pos": 442, "end_char_pos": 453 }, { "type": "R", "before": "coefficient", "after": "coefficients", "start_char_pos": 480, "end_char_pos": 491 }, { "type": "R", "before": "any convex risk measure used for indifference pricing leads to an equivalent martingale measure. This entails a simple linear representation", "after": "dynamic risk indifference prices provide tighter price bounds than dynamic upper and lower hedging prices. Besides, a linear version", "start_char_pos": 516, "end_char_pos": 656 }, { "type": "R", "before": "as the expected derivative payoff under the \"risk indifference measure\"", "after": "is given as the marginal risk price, i.e., the risk indifference price of a vanishing amount of derivatives", "start_char_pos": 670, "end_char_pos": 741 } ]
[ 0, 120, 335, 493, 612 ]
0909.3219
2
This paper studies a contingent claim pricing problem in markets where the underlying traded assets are illiquid. Based on the risk indifference principle, an explicit formula for the dynamic risk indifference price is given as the difference between the solutions of two one-dimensional BSDEs with stochastic Lipschitz coefficients. The results show that dynamic risk indifference prices provide tighter price bounds than dynamic upper and lower hedging prices. Besides, a linear version of the price is given as the marginal risk price, i.e., the risk indifference price of a vanishing amount of derivatives. From a risk management perspective, the model provides two-sided risk indifference bounds for derivative pricesin incomplete markets .
In the context of an incomplete market with a Brownian filtration and a fixed finite time horizon, this paper proves that for general dynamic convex risk measures, the buyer's and seller's risk indifference prices of a contingent claim are bounded from below and above by the dynamic lower and upper hedging prices, respectively .
[ { "type": "R", "before": "This paper studies a contingent claim pricing problem in markets where the underlying traded assets are illiquid. Based on the risk indifference principle, an explicit formula for the dynamic risk indifference price is given as the difference between the solutions of two one-dimensional BSDEs with stochastic Lipschitz coefficients. The results show that dynamic risk indifference prices provide tighter price bounds than dynamic upper and lower hedging prices. Besides, a linear version of the price is given as the marginal risk price, i.e., the risk indifference price of a vanishing amount of derivatives. From a risk management perspective, the model provides two-sided risk indifference bounds for derivative pricesin incomplete markets", "after": "In the context of an incomplete market with a Brownian filtration and a fixed finite time horizon, this paper proves that for general dynamic convex risk measures, the buyer's and seller's risk indifference prices of a contingent claim are bounded from below and above by the dynamic lower and upper hedging prices, respectively", "start_char_pos": 0, "end_char_pos": 743 } ]
[ 0, 113, 333, 462, 610 ]
0909.3244
1
Financial data give an opportunity to uncover the non-stationarity which may be hidden in many single time-series. Five years of daily Euro/ Dollar trading records in the about three hours following the New York opening session are shown to give an accurate ensemble representation of the self-similar , non-Markovian stochastic process with nonstationary increments recently conjectured to generally underlie financial assets dynamics PNAS%DIFDELCMD < {\bf %%% 104 , 19741 (2007)%DIFDELCMD < ]%%% . Introducing novel quantitative tools in the analysis of non-Markovian time-series we show that empirical non-linear correlators are in remarkable agreement with model predictions based only on the anomalous scaling form of the logarithmic return distribution.
A central problem of Quantitative Finance is that of formulating a probabilistic model of the time evolution of asset prices allowing reliable predictions on their future volatility. As in several natural phenomena, the predictions of such a model must be compared with the data of a single process realization in our records. In order to give statistical significance to such a comparison, assumptions of stationarity for some quantities extracted from the single historical time series, like the distribution of the returns over a given time interval, cannot be avoided. Such assumptions entail the risk of masking or misrepresenting non-stationarities of the underlying process, and of giving an incorrect account of its correlations. Here we overcome this difficulty by showing that five years of daily Euro/ US-Dollar trading records in the about three hours following the New York market opening, provide a rich enough ensemble of histories. The statistics of this ensemble allows to propose and test an adequate model of the stochastic process driving the exchange rate. This turns out to be a non-Markovian, self-similar %DIFDELCMD < {\bf %%% %DIFDELCMD < ]%%% process with non-stationary returns. The empirical ensemble correlators are in agreement with the predictions of this model, which is constructed on the basis of the time-inhomogeneous, anomalous scaling obeyed by the return distribution.
[ { "type": "R", "before": "Financial data give an opportunity to uncover the non-stationarity which may be hidden in many single time-series. Five", "after": "A central problem of Quantitative Finance is that of formulating a probabilistic model of the time evolution of asset prices allowing reliable predictions on their future volatility. As in several natural phenomena, the predictions of such a model must be compared with the data of a single process realization in our records. In order to give statistical significance to such a comparison, assumptions of stationarity for some quantities extracted from the single historical time series, like the distribution of the returns over a given time interval, cannot be avoided. Such assumptions entail the risk of masking or misrepresenting non-stationarities of the underlying process, and of giving an incorrect account of its correlations. Here we overcome this difficulty by showing that five", "start_char_pos": 0, "end_char_pos": 119 }, { "type": "R", "before": "Dollar", "after": "US-Dollar", "start_char_pos": 141, "end_char_pos": 147 }, { "type": "R", "before": "opening session are shown to give an accurate ensemble representation of the", "after": "market opening, provide a rich enough ensemble of histories. The statistics of this ensemble allows to propose and test an adequate model of the stochastic process driving the exchange rate. This turns out to be a non-Markovian,", "start_char_pos": 212, "end_char_pos": 288 }, { "type": "D", "before": ", non-Markovian stochastic process with nonstationary increments recently conjectured to generally underlie financial assets dynamics", "after": null, "start_char_pos": 302, "end_char_pos": 435 }, { "type": "D", "before": "PNAS", "after": null, "start_char_pos": 436, "end_char_pos": 440 }, { "type": "D", "before": "104", "after": null, "start_char_pos": 462, "end_char_pos": 465 }, { "type": "D", "before": ", 19741 (2007)", "after": null, "start_char_pos": 466, "end_char_pos": 480 }, { "type": "R", "before": ". Introducing novel quantitative tools in the analysis of non-Markovian time-series we show that empirical non-linear", "after": "process with non-stationary returns. The empirical ensemble", "start_char_pos": 498, "end_char_pos": 615 }, { "type": "R", "before": "remarkable agreement with model predictions based only on the anomalous scaling form of the logarithmic", "after": "agreement with the predictions of this model, which is constructed on the basis of the time-inhomogeneous, anomalous scaling obeyed by the", "start_char_pos": 635, "end_char_pos": 738 } ]
[ 0, 114, 303 ]
0909.3441
1
In this paper we provide evidence that financial option markets for equity indices give rise to non-trivial dependency structures between its constituents. Thus, if the individual constituent distributions of an equity index are inferred from the single-stock option markets and combined via a Gaussian copula, for example, one fails to explain the steepness of the observed volatility skew of the index. Intuitively, index option prices are encoding higher correlations in cases where the option is particularly sensitive to stress scenarios of the market. As a result, more complex dependency structures emerge than the ones described by Gaussian copulas or (state-independent) linear correlation structures. In this paper we "decode" the index option market and extract this correlation information in order to extend the multi-asset version of Dupire's "local volatility" model . A "local correlation" model (LCM) is introduced for the pricing of multi-asset derivatives . LCM achieves consistency with both the constituent- and index option markets by construction while preserving the efficiency and easy implementation of Dupire's model.
In this paper we provide evidence that financial option markets for equity indices give rise to non-trivial dependency structures between its constituents. Thus, if the individual constituent distributions of an equity index are inferred from the single-stock option markets and combined via a Gaussian copula, for example, one fails to explain the steepness of the observed volatility skew of the index. Intuitively, index option prices are encoding higher correlations in cases where the option is particularly sensitive to stress scenarios of the market. As a result, more complex dependency structures emerge than the ones described by Gaussian copulas or (state-independent) linear correlation structures. In this paper we "decode" the index option market and extract this correlation information in order to extend the multi-asset version of Dupire's "local volatility" model by making correlations a dynamic variable of the market . A "local correlation" model (LCM) is introduced for the pricing of multi-asset derivatives . We show how consistency with the index volatility data can be achieved by construction . LCM achieves consistency with both the constituent- and index option markets by construction while preserving the efficiency and easy implementation of Dupire's model.
[ { "type": "A", "before": null, "after": "by making correlations a dynamic variable of the market", "start_char_pos": 882, "end_char_pos": 882 }, { "type": "A", "before": null, "after": ". We show how consistency with the index volatility data can be achieved by construction", "start_char_pos": 976, "end_char_pos": 976 } ]
[ 0, 155, 404, 557, 710 ]
0909.4440
1
We propose a realistic mechanism accounting for the existence of sub-micrometric protein domains in cell membranes. At the biological level, such membrane domains have been demonstrated to be specialized, in that sense that they congregate one or a few protein species amongst the hundreds of different species that a cell membrane may contain , in order to perform a determined biological task . By analyzing the balance between mixing entropy and protein affinities, we elucidate by statistical mechanics arguments how such protein sorting in distinct domains can be explained without appealing to pre-existing lipidic micro-phase separations, as in the lipid raft scenario. At equilibrium, it is preferable to co-localize certain proteins because the ensuing energetic gain is larger than the concomitant entropic cost. This alternative mechanism is discussed to be compatible with known physical interactions between membrane proteins .
We discuss a realistic scenario, accounting for the existence of sub-micrometric protein domains in cell membranes. At the biological level, such membrane domains have been shown to be specialized, in order to perform a determined biological task, in the sense that they gather one or a few protein species out of the hundreds of different ones that a cell membrane may contain . By analyzing the balance between mixing entropy and protein affinities, we propose that such protein sorting in distinct domains can be explained without appealing to pre-existing lipidic micro-phase separations, as in the lipid raft scenario. We show that the proposed scenario is compatible with known physical interactions between membrane proteins , even if thousands of different species coexist .
[ { "type": "R", "before": "propose a realistic mechanism", "after": "discuss a realistic scenario,", "start_char_pos": 3, "end_char_pos": 32 }, { "type": "R", "before": "demonstrated", "after": "shown", "start_char_pos": 173, "end_char_pos": 185 }, { "type": "R", "before": "that", "after": "order to perform a determined biological task, in the", "start_char_pos": 208, "end_char_pos": 212 }, { "type": "R", "before": "congregate", "after": "gather", "start_char_pos": 229, "end_char_pos": 239 }, { "type": "R", "before": "amongst", "after": "out of", "start_char_pos": 269, "end_char_pos": 276 }, { "type": "R", "before": "species", "after": "ones", "start_char_pos": 303, "end_char_pos": 310 }, { "type": "D", "before": ", in order to perform a determined biological task", "after": null, "start_char_pos": 344, "end_char_pos": 394 }, { "type": "R", "before": "elucidate by statistical mechanics arguments how", "after": "propose that", "start_char_pos": 472, "end_char_pos": 520 }, { "type": "R", "before": "At equilibrium, it is preferable to co-localize certain proteins because the ensuing energetic gain is larger than the concomitant entropic cost. This alternative mechanism is discussed to be", "after": "We show that the proposed scenario is", "start_char_pos": 677, "end_char_pos": 868 }, { "type": "A", "before": null, "after": ", even if thousands of different species coexist", "start_char_pos": 939, "end_char_pos": 939 } ]
[ 0, 115, 396, 676, 822 ]
0909.4765
1
We derive probabilistic representations for the probability density function of the arbitrage price of a financial asset and the price of European call and put options in a linear stochastic volatility model with correlated Brownian noises. In such models the asset price satisfies a linear SDE with coefficient of linearity being the volatility process. Examples of such models are considered, including a log-normal stochastic volatility model . In all examples a closed formula for the density function is given. In the Appendix we present a conditional version of the Donati-Martin and Yor formula .
In this paper we investigate general linear stochastic volatility models with correlated Brownian noises. In such models the asset price satisfies a linear SDE with coefficient of linearity being the volatility process. This class contains among others Black-Scholes model, a log-normal stochastic volatility model and Heston stochastic volatility model. For a linear stochastic volatility model we derive representations for the probability density function of the arbitrage price of a financial asset and the prices of European call and put options. A closed-form formulae for the density function and the prices of European call and put options are given for log-normal stochastic volatility model. We also obtain present some new results for Heston and extended Heston stochastic volatility models .
[ { "type": "R", "before": "We derive probabilistic representations for the probability density function of the arbitrage price of a financial asset and the price of European call and put options in a", "after": "In this paper we investigate general", "start_char_pos": 0, "end_char_pos": 172 }, { "type": "R", "before": "model", "after": "models", "start_char_pos": 202, "end_char_pos": 207 }, { "type": "R", "before": "Examples of such models are considered, including", "after": "This class contains among others Black-Scholes model,", "start_char_pos": 355, "end_char_pos": 404 }, { "type": "R", "before": ". In all examples a closed formula for the density function is given. In the Appendix we present a conditional version of the Donati-Martin and Yor formula", "after": "and Heston stochastic volatility model. For a linear stochastic volatility model we derive representations for the probability density function of the arbitrage price of a financial asset and the prices of European call and put options. A closed-form formulae for the density function and the prices of European call and put options are given for log-normal stochastic volatility model. We also obtain present some new results for Heston and extended Heston stochastic volatility models", "start_char_pos": 446, "end_char_pos": 601 } ]
[ 0, 240, 354, 447, 515 ]
0910.0509
1
Boolean networks are discrete dynamical systems in which the state (zero or one) of each network node at time t is updated to a state determined by the states at time t-1 of those nodes that have links to it. Boolean networks with ` canalizing' update rules have been of great interestin the modeling of genetic control . A canalizing update rule is one for which the node state at time t is determined by the state at time t-1 of a particular one of its inputs when that input is in its canalizing state. In this paper, we introduce a generalized concept of canalization that we believe offers a significant enhancement in biological relevance, and we obtain a simple general network stability criterion for Boolean networks with generalized canalization for a broad class of network topologies .
Boolean networks are discrete dynamical systems in which the state (zero or one) of each node is updated at each time t to a state determined by the states at time t-1 of those nodes that have links to it. When these systems are used to model genetic control, the case of ' canalizing' update rules is of particular interest . A canalizing rule is one for which a node state at time t is determined by the state at time t-1 of a single one of its inputs when that inputting node is in its canalizing state. Previous work on the order/disorder transition in Boolean networks considered complex, non-random network topology. In the current paper we extend this previous work to account for canalizing behavior .
[ { "type": "R", "before": "network node at time t is updated", "after": "node is updated at each time t", "start_char_pos": 89, "end_char_pos": 122 }, { "type": "R", "before": "Boolean networks with `", "after": "When these systems are used to model genetic control, the case of '", "start_char_pos": 209, "end_char_pos": 232 }, { "type": "R", "before": "have been of great interestin the modeling of genetic control", "after": "is of particular interest", "start_char_pos": 258, "end_char_pos": 319 }, { "type": "D", "before": "update", "after": null, "start_char_pos": 335, "end_char_pos": 341 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 364, "end_char_pos": 367 }, { "type": "R", "before": "particular", "after": "single", "start_char_pos": 433, "end_char_pos": 443 }, { "type": "R", "before": "input", "after": "inputting node", "start_char_pos": 472, "end_char_pos": 477 }, { "type": "R", "before": "In this paper, we introduce a generalized concept of canalization that we believe offers a significant enhancement in biological relevance, and we obtain a simple general network stability criterion for Boolean networks with generalized canalization for a broad class of network topologies", "after": "Previous work on the order/disorder transition in Boolean networks considered complex, non-random network topology. In the current paper we extend this previous work to account for canalizing behavior", "start_char_pos": 506, "end_char_pos": 795 } ]
[ 0, 208, 321, 505 ]
0910.1416
1
Human olfactory receptor , OR1D2, binds to Bourgeonal , a chemical constituent of the mythical flower, Lily of the valley or Our Lady's tears (1) . OR1D2, OR1D4 and OR1D5 are three full length olfactory receptors present in an olfactory locus in human genome. These receptors are more than 80\% identical in DNA sequences and have 107 base pair mismatches among them. Apparently, these mismatch positions show no striking pattern using computer pattern recognition tools. In an attempt to find a mathematical rule in those mismatches, we find that L-system generated sequence can be inserted into the OR1D2 subfamily specific star model and novel full length olfactory receptors can be generated. This remarkable mathematical principle could be utilized for making new subfamily OR members from any OR subfamily. Aroma and electronic nose industry might utilize this rule in a large way in near future .
Ligands for only two human olfactory receptors are known. One of them , OR1D2, binds to Bourgeonal Malnic B, Godfrey P-A, Buck L-B (2004) The human olfactory receptor gene family. Proc. Natl. Acad. Sci U. S. A. 101: 2584-2589 and Erratum in: Proc Natl Acad Sci U. S. A. (2004) 101: 7205 . OR1D2, OR1D4 and OR1D5 are three full length olfactory receptors present in an olfactory locus in human genome. These receptors are more than 80\% identical in DNA sequences and have 108 base pair mismatches among them. We have used L-system mathematics and have been able to show a closely related subfamily of OR1D2, OR1D4 and OR1D5 .
[ { "type": "R", "before": "Human olfactory receptor", "after": "Ligands for only two human olfactory receptors are known. One of them", "start_char_pos": 0, "end_char_pos": 24 }, { "type": "R", "before": ", a chemical constituent of the mythical flower, Lily of the valley or Our Lady's tears (1)", "after": "Malnic B, Godfrey P-A, Buck L-B (2004) The human olfactory receptor gene family. Proc. Natl. Acad. Sci U. S. A. 101: 2584-2589 and Erratum in: Proc Natl Acad Sci U. S. A. (2004) 101: 7205", "start_char_pos": 54, "end_char_pos": 145 }, { "type": "R", "before": "107", "after": "108", "start_char_pos": 331, "end_char_pos": 334 }, { "type": "R", "before": "Apparently, these mismatch positions show no striking pattern using computer pattern recognition tools. In an attempt to find a mathematical rule in those mismatches, we find that", "after": "We have used", "start_char_pos": 368, "end_char_pos": 547 }, { "type": "R", "before": "generated sequence can be inserted into the OR1D2 subfamily specific star model and novel full length olfactory receptors can be generated. This remarkable mathematical principle could be utilized for making new subfamily OR members from any OR subfamily. Aroma and electronic nose industry might utilize this rule in a large way in near future", "after": "mathematics and have been able to show a closely related subfamily of OR1D2, OR1D4 and OR1D5", "start_char_pos": 557, "end_char_pos": 901 } ]
[ 0, 259, 367, 471, 696, 812 ]
0910.2309
1
We obtain new closed-form pricing formulas for contingent claims when the asset follows a Dupire-type local volatility model. To obtain the formulas we use the Dyson-Taylor commutator method re- cently developed in [ 7, 8, 10 ] for short time asymptotic expansions of heat kernels, and obtain a family of general explicit closed form approx- imate solutions for both the pricing kernel and derivative price . We also perform analytic as well as a numerical error analysis, and compare our results to other known methods.
We obtain new closed-form pricing formulas for contingent claims when the asset follows a Dupire-type local volatility model. To obtain the formulas we use the Dyson-Taylor commutator method that we have recently developed in [ 5, 6, 8 ] for short-time asymptotic expansions of heat kernels, and obtain a family of general closed-form approximate solutions for both the pricing kernel and derivative price . A bootstrap scheme allows us to extend our method to large time . We also perform analytic as well as a numerical error analysis, and compare our results to other known methods.
[ { "type": "R", "before": "re- cently", "after": "that we have recently", "start_char_pos": 191, "end_char_pos": 201 }, { "type": "R", "before": "7, 8, 10", "after": "5, 6, 8", "start_char_pos": 217, "end_char_pos": 225 }, { "type": "R", "before": "short time", "after": "short-time", "start_char_pos": 232, "end_char_pos": 242 }, { "type": "R", "before": "explicit closed form approx- imate", "after": "closed-form approximate", "start_char_pos": 313, "end_char_pos": 347 }, { "type": "A", "before": null, "after": ". A bootstrap scheme allows us to extend our method to large time", "start_char_pos": 407, "end_char_pos": 407 } ]
[ 0, 125, 409 ]
0910.2465
1
Arrow's theorem implies that a social choice function satisfying Transitivity, the Pareto Principle (Unanimity) and Independence of Irrelevant Alternatives (IIA) must be dictatorial. When non-strict preferences are allowed, a dictatorial social choice function is defined as a function for which there exists a single voter whose strict preferences are followed. This definition allows for many different dictatorial functions. In particular, we construct examples of dictatorial functions which do not satisfy Transitivity and IIA. Thus Arrow's theorem, in the case of non-strict preferences, does not provide a complete characterization of all social choice functions satisfying Transitivity, the Pareto Principle and IIA. The main results of this paper provide such a characterization for Arrow's theorem, as well as for follow up results by Wilson. In particular, we strengthen Arrow's and Wilson's result by giving an exact if and only if condition for a function to satisfy Transitivity and IIA (and the Pareto Principle). Additionally, we derive formulas for the number of functions satisfying these conditions.
Arrow's theorem implies that a social choice function satisfying Transitivity, the Pareto Principle (Unanimity) and Independence of Irrelevant Alternatives (IIA) must be dictatorial. When non-strict preferences are allowed, a dictatorial social choice function is defined as a function for which there exists a single voter whose strict preferences are followed. This definition allows for many different dictatorial functions. In particular, we construct examples of dictatorial functions which do not satisfy Transitivity and IIA. Thus Arrow's theorem, in the case of non-strict preferences, does not provide a complete characterization of all social choice functions satisfying Transitivity, the Pareto Principle , and IIA. The main results of this article provide such a characterization for Arrow's theorem, as well as for follow up results by Wilson. In particular, we strengthen Arrow's and Wilson's result by giving an exact if and only if condition for a function to satisfy Transitivity and IIA (and the Pareto Principle). Additionally, we derive formulas for the number of functions satisfying these conditions.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 716, "end_char_pos": 716 }, { "type": "R", "before": "paper", "after": "article", "start_char_pos": 751, "end_char_pos": 756 } ]
[ 0, 182, 362, 427, 532, 725, 853, 1029 ]
0910.3936
1
The choice of admissible trading strategies in mathematical modelling of financial markets is a delicate issue, going back to Harrison and Kreps (1979). In the context of optimal portfolio selection with expected utility preferences this question has been a focus of considerable attention over the last twenty years. We propose a novel notion of admissibility that has many pleasant features -- admissibility is characterized purely under the objective measure P ; the wealth of any admissible strategy is a supermartingale under all pricing measures; local boundedness of the price process is not required; neither strict monotonicity, strict concavity nor differentiability of the utility function are necessary; the definition encompasses both the classical mean-variance preferences and the monotone expected utility. For utility functions finite on the real line, our class represents a minimal set containing simple strategies which also contains the optimizer, under conditions that are substantially milder than the celebrated reasonable asymptotic elasticity condition on the utility function . In particular, no condition on the behavior of the utility at -\infty is needed .
The choice of admissible trading strategies in mathematical modelling of financial markets is a delicate issue, going back to Harrison and Kreps (1979). In the context of optimal portfolio selection with expected utility preferences this question has been a focus of considerable attention over the last twenty years. We propose a novel notion of admissibility that has many pleasant features - admissibility is characterized purely under the objective measure ; the wealth of any admissible strategy is a supermartingale under all pricing measures; local boundedness of the price process is not required; neither strict monotonicity, strict concavity nor differentiability of the utility function are necessary; the definition encompasses both the classical mean-variance preferences and the monotone expected utility. For utility functions finite on the whole real line, our class represents a minimal set containing simple strategies which also contains the optimizer, under conditions that are milder than the celebrated reasonable asymptotic elasticity condition on the utility function .
[ { "type": "R", "before": "--", "after": "-", "start_char_pos": 393, "end_char_pos": 395 }, { "type": "D", "before": "P", "after": null, "start_char_pos": 462, "end_char_pos": 463 }, { "type": "A", "before": null, "after": "whole", "start_char_pos": 859, "end_char_pos": 859 }, { "type": "D", "before": "substantially", "after": null, "start_char_pos": 996, "end_char_pos": 1009 }, { "type": "D", "before": ". In particular, no condition on the behavior of the utility at -\\infty is needed", "after": null, "start_char_pos": 1104, "end_char_pos": 1185 } ]
[ 0, 152, 317, 465, 552, 608, 715, 822, 1105 ]
0910.5819
1
We investigate bisimulation equivalence on Petri nets under durational semantics. Our motivation was to verify the conjecture that in durational setting, the bisimulation equivalence checking problem becomes more tractable (which is the case, e.g., over communication-free nets). We disprove this conjecture in three of four proposed variants of durational semantics. The fourth case remains an interesting open problem.
We investigate bisimulation equivalence on Petri nets under durational semantics. Our motivation was to verify the conjecture that in durational setting, the bisimulation equivalence checking problem becomes more tractable than in ordinary setting (which is the case, e.g., over communication-free nets). We disprove this conjecture in three of four proposed variants of durational semantics. The fourth variant remains an intriguing open problem.
[ { "type": "A", "before": null, "after": "than in ordinary setting", "start_char_pos": 223, "end_char_pos": 223 }, { "type": "R", "before": "case remains an interesting", "after": "variant remains an intriguing", "start_char_pos": 380, "end_char_pos": 407 } ]
[ 0, 81, 280, 368 ]
0911.0113
1
Families of exact solutions are found for a nonlinear modification of the Black-Scholes equation. This risk-adjusted pricing methodology model (RAPM) incorporates both transaction costs and the risk from a volatile portfolio. Using the Lie group analysis we obtain the Lie algebra admitted by the RAPM equation. It gives us the possibility to describe an optimal system of subalgebras and correspondingly the set of invariant solutions to the model. On this way we can describe complete set of possible reductions of the nonlinear RAPM model. Reductions are given in form of different second order ordinary differential equations. In some cases they can be reduced as well to first oder equations . We discuss the property of these reductions and corresponding invariant solutions.
Families of exact solutions are found to a nonlinear modification of the Black-Scholes equation. This risk-adjusted pricing methodology model (RAPM) incorporates both transaction costs and the risk from a volatile portfolio. Using the Lie group analysis we obtain the Lie algebra admitted by the RAPM equation. It gives us the possibility to describe an optimal system of subalgebras and correspondingly the set of invariant solutions to the model. In this way we can describe the complete set of possible reductions of the nonlinear RAPM model. Reductions are given in the form of different second order ordinary differential equations. In all cases we provide solutions to these equations in an exact or parametric form . We discuss the properties of these reductions and the corresponding invariant solutions.
[ { "type": "R", "before": "for", "after": "to", "start_char_pos": 38, "end_char_pos": 41 }, { "type": "R", "before": "On", "after": "In", "start_char_pos": 450, "end_char_pos": 452 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 478, "end_char_pos": 478 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 568, "end_char_pos": 568 }, { "type": "R", "before": "some cases they can be reduced as well to first oder equations", "after": "all cases we provide solutions to these equations in an exact or parametric form", "start_char_pos": 636, "end_char_pos": 698 }, { "type": "R", "before": "property", "after": "properties", "start_char_pos": 716, "end_char_pos": 724 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 749, "end_char_pos": 749 } ]
[ 0, 97, 225, 311, 449, 543, 632, 700 ]
0911.0454
1
This is the first delivery of the Financial Bubble Experiment that our group has recently launched within the Financial Crisis Observatory (FCO) at ETH Zurich URL
On 2 November 2009, the Financial Bubble Experiment was launched within the Financial Crisis Observatory (FCO) at ETH Zurich URL In that initial report, we diagnosed and announced three bubbles on three different assets. In this latest release of 23 December 2009 in this ongoing experiment, we add a diagnostic of a new bubble developing on a fourth asset.
[ { "type": "R", "before": "This is the first delivery of the", "after": "On 2 November 2009, the", "start_char_pos": 0, "end_char_pos": 33 }, { "type": "R", "before": "that our group has recently", "after": "was", "start_char_pos": 62, "end_char_pos": 89 }, { "type": "A", "before": null, "after": "In that initial report, we diagnosed and announced three bubbles on three different assets. In this latest release of 23 December 2009 in this ongoing experiment, we add a diagnostic of a new bubble developing on a fourth asset.", "start_char_pos": 163, "end_char_pos": 163 } ]
[ 0 ]
0911.0562
1
We give a rigorous proof of the representation of implied volatility as a time-average of weighted expectations of local or stochastic volatility. With this proof we fix the problem of a circular definition in the original derivation of Gatheral, who introduced this implied volatility representation in his book 'The Volatility Surface'.
We give a new proof of the representation of implied volatility as a time-average of weighted expectations of local or stochastic volatility. With this proof we clarify the question of existence of 'forward implied variance' in the original derivation of Gatheral, who introduced this representation in his book 'The Volatility Surface'.
[ { "type": "R", "before": "rigorous", "after": "new", "start_char_pos": 10, "end_char_pos": 18 }, { "type": "R", "before": "fix the problem of a circular definition", "after": "clarify the question of existence of 'forward implied variance'", "start_char_pos": 166, "end_char_pos": 206 }, { "type": "D", "before": "implied volatility", "after": null, "start_char_pos": 267, "end_char_pos": 285 } ]
[ 0, 146 ]
0911.3117
1
An optimal investment problem is solved for an insider who has access to noisy information related to a future stock price, but who does not know the stock price drift. The drift is filtered from a combination of price observations and the privileged information, fusing a partial information scenario with enlargement of filtration techniques. We apply a variant of the Kalman-Bucy filter to infer a signal, given a combination of an observation process and some additional information. This converts the combined partial and inside information model to a full information model, and the associated investment problem for HARA utility is explicitly solved via duality methods. We consider the cases in which the agent has information on the terminal value of the Brownian motion driving the stock, and on the terminal stock price itself. Comparisons are drawn with the classical partial information case without insider knowledge. The parameter uncertainty results in stock price inside information being more valuable than Brownian information, and perfect knowledge of the future stock price leads to infinite additional utility. This is in contrast to the conventional case in which the stock drift is assumed known, in which perfect information of any kind leads to unbounded additional utility, since stock price information is then indistinguishable from Brownian information .
This paper has been withdrawn by the authors pending corrections .
[ { "type": "R", "before": "An optimal investment problem is solved for an insider who has access to noisy information related to a future stock price, but who does not know the stock price drift. The drift is filtered from a combination of price observations and the privileged information, fusing a partial information scenario with enlargement of filtration techniques. We apply a variant of the Kalman-Bucy filter to infer a signal, given a combination of an observation process and some additional information. This converts the combined partial and inside information model to a full information model, and the associated investment problem for HARA utility is explicitly solved via duality methods. We consider the cases in which the agent has information on the terminal value of the Brownian motion driving the stock, and on the terminal stock price itself. Comparisons are drawn with the classical partial information case without insider knowledge. The parameter uncertainty results in stock price inside information being more valuable than Brownian information, and perfect knowledge of the future stock price leads to infinite additional utility. This is in contrast to the conventional case in which the stock drift is assumed known, in which perfect information of any kind leads to unbounded additional utility, since stock price information is then indistinguishable from Brownian information", "after": "This paper has been withdrawn by the authors pending corrections", "start_char_pos": 0, "end_char_pos": 1382 } ]
[ 0, 168, 344, 487, 677, 838, 931, 1132 ]
0911.3802
1
We apply a Coupled Markov Chain approach to model rating transitions and thereby default probabilities of companies. We estimate parameters by applying a maximum likelihood estimation using a large set of historical ratings. Given the parameters the model can be used to simulate scenarios for joint rating changes of a set of companies, enabling the use of contemporary risk managementtechniques. We obtain scenarios for the payment streams generated by CDX contracts and portfolios of such contracts. This allows for assessing the risk of the current position held and design portfolios which are optimal relative to the risk preferences of the investor .
We propose a Markov chain model for credit rating changes. We do not use any distributional assumptions on the asset values of the rated companies but directly model the rating transitions process. The parameters of the model are estimated by a maximum likelihood approach using historical rating transitions and heuristic global optimization techniques. We benchmark the model against a GLMM model in the context of bond portfolio risk management. The proposed model yields stronger dependencies and higher risks than the GLMM model. As a result, the risk optimal portfolios are more conservative than the decisions resulting from the benchmark model .
[ { "type": "R", "before": "apply a Coupled Markov Chain approach to model rating transitions and thereby default probabilities of companies. We estimate parameters by applying", "after": "propose a Markov chain model for credit rating changes. We do not use any distributional assumptions on the asset values of the rated companies but directly model the rating transitions process. The parameters of the model are estimated by", "start_char_pos": 3, "end_char_pos": 151 }, { "type": "R", "before": "estimation using a large set of historical ratings. Given the parameters the model can be used to simulate scenarios for joint rating changes of a set of companies, enabling the use of contemporary risk managementtechniques. We obtain scenarios for the payment streams generated by CDX contracts and portfolios of such contracts. This allows for assessing the risk of the current position held and design portfolios which are optimal relative to the risk preferences of the investor", "after": "approach using historical rating transitions and heuristic global optimization techniques. We benchmark the model against a GLMM model in the context of bond portfolio risk management. The proposed model yields stronger dependencies and higher risks than the GLMM model. As a result, the risk optimal portfolios are more conservative than the decisions resulting from the benchmark model", "start_char_pos": 173, "end_char_pos": 655 } ]
[ 0, 116, 224, 397, 502 ]
0911.4871
1
There is little objective insight about various statistical aspects regarding the nature of occurrences of 3-10 and Pi-helices. Comprehensive set of reasons behind the existence of 3-10 and Pi-helices can only be obtained if the occurrence profile of these on the primary structure is unambiguously described from the perspectives of sequence, structure and evolution. Although studies about the compositional and energetic profile of 3-10 and Pi-helices aren't uncommon, merely that doesn't tell us why these (rather unstable) structures are found in the proteins at the first place. Considering all the non-redundant protein structures across all the major structural classes, the present study attempts to find the probabilistic distributions that describe several facets of the occurrence of these rare secondary structures in proteins. Structural causes for observing these statistical patterns are explained too. Probabilistic profiling of the occurrence of 3-10 and Pi-helices reveal their presence to follow Poisson flow on the sequence. With extensive analysis from varying standpoints we prove here that, such Poisson flows suggest the 3-10-helices and especially Pi-helices to be evidences of nature's mistakes on folding pathways. This hypothesis is further supported by results of critical evolutionary analysis on 20 major protein domain families, which reveal the definitive trend that proteins try to dispose these structures off during evolution, in favor of alpha-helices. Alongside these unexpected and significant results } , a new algorithm to differentiate between related sequences is proposed here that reliably studies evolutionary distance with respect to protein secondary structures.
Considering all available non-redundant protein structures across different structural classes, present study identified the probabilistic characteristics that describe several facets of the occurrence of 3(10) and Pi-helices in proteins. Occurrence profile of 3(10) and Pi-helices revealed that, their presence follows Poisson flow on the primary structure; implying that, their occurrence profile is rare, random and accidental. Structural class-specific statistical analyses of sequence intervals between consecutive occurrences of 3(10) and Pi-helices revealed that these could be best described by gamma and exponential distributions, across structural classes. Comparative study of normalized percentage of non-glycine and non-proline residues in 3(10), Pi and alpha-helices revealed a considerably higher proportion of 3(10) and Pi-helix residues in disallowed, generous and allowed regions of Ramachandran map. Probe into these findings in the light of evolution suggested clearly that 3(10) and Pi-helices should appropriately be viewed as evolutionary intermediates on long time scale, for not only the \alpha}-helical conformation but also for the 'turns', equiprobably. Hence, accidental and random nature of occurrences of 3(10) and Pi-helices, and their evolutionary non-conservation, could be described and explained from an invariant quantitative framework. Extent of correctness of two previously proposed hypotheses on 3(10) and Pi-helices, have been investigated too. Alongside these , a new algorithm to differentiate between related sequences is proposed , which reliably studies evolutionary distance with respect to protein secondary structures.
[ { "type": "R", "before": "There is little objective insight about various statistical aspects regarding the nature of occurrences of 3-10 and Pi-helices. Comprehensive set of reasons behind the existence of 3-10 and Pi-helices can only be obtained if the occurrence profile of these on the primary structure is unambiguously described from the perspectives of sequence, structure and evolution. Although studies about the compositional and energetic profile of 3-10 and Pi-helices aren't uncommon, merely that doesn't tell us why these (rather unstable) structures are found in the proteins at the first place. Considering all the", "after": "Considering all available", "start_char_pos": 0, "end_char_pos": 604 }, { "type": "R", "before": "all the major", "after": "different", "start_char_pos": 645, "end_char_pos": 658 }, { "type": "R", "before": "the present study attempts to find the probabilistic distributions", "after": "present study identified the probabilistic characteristics", "start_char_pos": 679, "end_char_pos": 745 }, { "type": "R", "before": "these rare secondary structures", "after": "3(10) and Pi-helices", "start_char_pos": 796, "end_char_pos": 827 }, { "type": "R", "before": "Structural causes for observing these statistical patterns are explained too. Probabilistic profiling of the occurrence of 3-10", "after": "Occurrence profile of 3(10)", "start_char_pos": 841, "end_char_pos": 968 }, { "type": "R", "before": "reveal their presence to follow", "after": "revealed that, their presence follows", "start_char_pos": 984, "end_char_pos": 1015 }, { "type": "R", "before": "sequence. With extensive analysis from varying standpoints we prove here that, such Poisson flows suggest the 3-10-helices and especially", "after": "primary structure; implying that, their occurrence profile is rare, random and accidental. Structural class-specific statistical analyses of sequence intervals between consecutive occurrences of 3(10) and", "start_char_pos": 1036, "end_char_pos": 1173 }, { "type": "R", "before": "to be evidences of nature's mistakes on folding pathways. This hypothesis is further supported by results of critical evolutionary analysis on 20 major protein domain families, which reveal the definitive trend that proteins try to dispose these structures off during evolution, in favor of alpha-helices. Alongside these unexpected and significant results", "after": "revealed that these could be best described by gamma and exponential distributions, across structural classes. Comparative study of normalized percentage of non-glycine and non-proline residues in 3(10), Pi and alpha-helices revealed a considerably higher proportion of 3(10) and Pi-helix residues in disallowed, generous and allowed regions of Ramachandran map. Probe into these findings in the light of evolution suggested clearly that 3(10) and Pi-helices should appropriately be viewed as evolutionary intermediates on long time scale, for not only the", "start_char_pos": 1185, "end_char_pos": 1541 }, { "type": "A", "before": null, "after": "\\alpha", "start_char_pos": 1542, "end_char_pos": 1542 }, { "type": "A", "before": null, "after": "-helical conformation but also for the 'turns', equiprobably. Hence, accidental and random nature of occurrences of 3(10) and Pi-helices, and their evolutionary non-conservation, could be described and explained from an invariant quantitative framework. Extent of correctness of two previously proposed hypotheses on 3(10) and Pi-helices, have been investigated too. Alongside these", "start_char_pos": 1543, "end_char_pos": 1543 }, { "type": "R", "before": "here that", "after": ", which", "start_char_pos": 1617, "end_char_pos": 1626 } ]
[ 0, 127, 368, 584, 840, 918, 1045, 1242, 1490 ]
0911.5117
1
We analyze the regularity of the optimal exercise boundary for the American Put option when the underlying asset pays a discrete dividend at a known time t_d during the lifetime of the option. The ex-dividend asset price process is assumed to follow Black-Scholes dynamics and the dividend amount is a deterministic function of the ex-dividend asset price just before the dividend date. The solution to the associated optimal stopping problem can be characterised in terms of an optimal exercise boundary which, in contrast to the case when there are no dividends, is no longer monotone. In this paper we prove that when the dividend function is positive and concave, then the boundary tends to 0 as time tends to t_d^- and is non-increasing in a left-hand neighbourhood of t_d. We also show that the exercise boundary is continuous and a high contact principle holds in such a neighbourhood when the dividend function is moreover linearin a neighbourhood of 0.
We analyze the regularity of the optimal exercise boundary for the American Put option when the underlying asset pays a discrete dividend at a known time t_d during the lifetime of the option. The ex-dividend asset price process is assumed to follow Black-Scholes dynamics and the dividend amount is a deterministic function of the ex-dividend asset price just before the dividend date. The solution to the associated optimal stopping problem can be characterised in terms of an optimal exercise boundary which, in contrast to the case when there are no dividends, may no longer be monotone. In this paper we prove that when the dividend function is positive and concave, then the boundary is non-increasing in a left-hand neighbourhood of t_d, and tends to 0 as time tends to t_d^- with a speed that we can characterize. When the dividend function is linear in a neighbourhood of zero, then we show continuity of the exercise boundary and a high contact principle in the left-hand neighbourhood of t_d. When it is globally linear, then right-continuity of the boundary and the high contact principle are proved to hold globally. Finally, we show how all the previous results can be extended to multiple dividend payment dates in that case.
[ { "type": "R", "before": "is no longer", "after": "may no longer be", "start_char_pos": 565, "end_char_pos": 577 }, { "type": "A", "before": null, "after": "is non-increasing in a left-hand neighbourhood of t_d, and", "start_char_pos": 686, "end_char_pos": 686 }, { "type": "R", "before": "and is non-increasing in a left-hand neighbourhood of t_d. We also show that", "after": "with a speed that we can characterize. When the dividend function is linear in a neighbourhood of zero, then we show continuity of", "start_char_pos": 721, "end_char_pos": 797 }, { "type": "D", "before": "is continuous", "after": null, "start_char_pos": 820, "end_char_pos": 833 }, { "type": "R", "before": "holds in such a neighbourhood when the dividend function is moreover linearin a neighbourhood of 0.", "after": "in the left-hand neighbourhood of t_d. When it is globally linear, then right-continuity of the boundary and the high contact principle are proved to hold globally. Finally, we show how all the previous results can be extended to multiple dividend payment dates in that case.", "start_char_pos": 863, "end_char_pos": 962 } ]
[ 0, 192, 386, 587, 779 ]
0912.0122
1
We study the evolution of quantum entanglement during exciton energy transfer (EET) in a network model of the Fenna-Matthews-Olson (FMO) complex, a biological pigment-protein complex involved in the early steps of photosynthesis in sulphur bacteria. The influence of Markovian, as well as spatially and temporally correlated (non-Markovian) noise on the generation of entanglement across distinct chromophores (site entanglement) and different excitons (mode entanglement) is studied for different injection mechanisms, like thermal and coherent laser excitation. Additionally, we study the entangling power of the FMO complex under natural operating conditions. While quantum information processing tends to favor maximal entanglement, near unit EET is achieved when the initial part of the evolution displays intermediate values of both forms of entanglement which is the result of an intricate interplay between coherent and noisy processes in these complex systems .
We study the evolution of quantum entanglement during exciton energy transfer (EET) in a network model of the Fenna-Matthews-Olson (FMO) complex, a biological pigment-protein complex involved in the early steps of photosynthesis in sulphur bacteria. The influence of Markovian, as well as spatially and temporally correlated (non-Markovian) noise on the generation of entanglement across distinct chromophores (site entanglement) and different excitonic eigenstates (mode entanglement) is studied for different injection mechanisms, including thermal and coherent laser excitation. Additionally, we study the entangling power of the FMO complex under natural operating conditions. While quantum information processing tends to favor maximal entanglement, near unit EET is achieved as the result of an intricate interplay between coherent and noisy processes where the initial part of the evolution displays intermediate values of both forms of entanglement .
[ { "type": "R", "before": "excitons", "after": "excitonic eigenstates", "start_char_pos": 444, "end_char_pos": 452 }, { "type": "R", "before": "like", "after": "including", "start_char_pos": 520, "end_char_pos": 524 }, { "type": "R", "before": "when the", "after": "as the result of an intricate interplay between coherent and noisy processes where the", "start_char_pos": 763, "end_char_pos": 771 }, { "type": "D", "before": "which is the result of an intricate interplay between coherent and noisy processes in these complex systems", "after": null, "start_char_pos": 861, "end_char_pos": 968 } ]
[ 0, 249, 563, 662 ]
0912.0161
1
Network communities help the URLanization and evolution of complex networks. However, the development of a method, which is both fast and accurate, provides modular overlaps and partitions of a heterogeneous network, was rather difficult. Here we introduce the novel concept of community landscapes and ModuLand, an integrative framework determining overlapping network modules as hills of the community landscape and including several widely used modularization methods as special cases. The current implementations of the community landscape concept provide a fast analysis of weighted and directed networks, and (1) determine overlapping modules with a high resolution; (2) uncover a hierarchical network structure in previously unprecedented details allowing a fast , zoom-in analysis of large networks; (3) allow the determination of key network elements and (4) help to predict network dynamics. The concept opens a wide range of possibilities to develop new approaches and applications including network routing, classification, comparison and prediction.
Background: Network communities help the URLanization and evolution of complex networks. However, the development of a method, which is both fast and accurate, provides modular overlaps and partitions of a heterogeneous network, has proven to be rather difficult. Methodology/Principal Findings: Here we introduce the novel concept of ModuLand, an integrative method family determining overlapping network modules as hills of an influence function-based, centrality-type community landscape, and including several widely used modularization methods as special cases. As various adaptations of the method family, we developed several algorithms, which provide an efficient analysis of weighted and directed networks, and (1) determine pervasively overlapping modules with high resolution; (2) uncover a detailed hierarchical network structure allowing an efficient , zoom-in analysis of large networks; (3) allow the determination of key network nodes and (4) help to predict network dynamics. Conclusions/Significance: The concept opens a wide range of possibilities to develop new approaches and applications including network routing, classification, comparison and prediction.
[ { "type": "A", "before": null, "after": "Background:", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "was", "after": "has proven to be", "start_char_pos": 218, "end_char_pos": 221 }, { "type": "A", "before": null, "after": "Methodology/Principal Findings:", "start_char_pos": 240, "end_char_pos": 240 }, { "type": "D", "before": "community landscapes and", "after": null, "start_char_pos": 280, "end_char_pos": 304 }, { "type": "R", "before": "framework", "after": "method family", "start_char_pos": 330, "end_char_pos": 339 }, { "type": "R", "before": "the community landscape", "after": "an influence function-based, centrality-type community landscape,", "start_char_pos": 392, "end_char_pos": 415 }, { "type": "R", "before": "The current implementations of the community landscape concept provide a fast", "after": "As various adaptations of the method family, we developed several algorithms, which provide an efficient", "start_char_pos": 491, "end_char_pos": 568 }, { "type": "A", "before": null, "after": "pervasively", "start_char_pos": 631, "end_char_pos": 631 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 657, "end_char_pos": 658 }, { "type": "A", "before": null, "after": "detailed", "start_char_pos": 690, "end_char_pos": 690 }, { "type": "R", "before": "in previously unprecedented details allowing a fast", "after": "allowing an efficient", "start_char_pos": 722, "end_char_pos": 773 }, { "type": "R", "before": "elements", "after": "nodes", "start_char_pos": 855, "end_char_pos": 863 }, { "type": "A", "before": null, "after": "Conclusions/Significance:", "start_char_pos": 906, "end_char_pos": 906 } ]
[ 0, 77, 239, 490, 675, 811, 905 ]
0912.0857
1
We explore what causes business cycles by analyzing the Japanese industrial production data. The methods are spectral analysis and factor analysis. Using the random matrix theory, we show that two largest eigenvalues are significant , and identify the first dominant factor as the aggregate demand, and the second factor as inventory adjustment. They cannot be reasonably interpreted as technological shocks. We demonstrate that in terms of two dominant factors, shipments lead production by four months. Furthermore, out-of-sample test demonstrates that the model stands the 2008-09 recession. Because a fall of output during 2008-09 was caused by an exogenous drop in exports, it provides another justification for identifying the first dominant factor as the aggregate demand. All the findings suggest that the major cause of business cycles is real demand shocks.
We explore what causes business cycles by analyzing the Japanese industrial production data. The methods are spectral analysis and factor analysis. Using the random matrix theory, we show that two largest eigenvalues are significant . Taking advantage of the information revealed by disaggregated data, we identify the first dominant factor as the aggregate demand, and the second factor as inventory adjustment. They cannot be reasonably interpreted as technological shocks. We also demonstrate that in terms of two dominant factors, shipments lead production by four months. Furthermore, out-of-sample test demonstrates that the model holds up even under the 2008-09 recession. Because a fall of output during 2008-09 was caused by an exogenous drop in exports, it provides another justification for identifying the first dominant factor as the aggregate demand. All the findings suggest that the major cause of business cycles is real demand shocks.
[ { "type": "R", "before": ", and", "after": ". Taking advantage of the information revealed by disaggregated data, we", "start_char_pos": 233, "end_char_pos": 238 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 412, "end_char_pos": 412 }, { "type": "R", "before": "stands", "after": "holds up even under", "start_char_pos": 566, "end_char_pos": 572 } ]
[ 0, 92, 147, 345, 408, 505, 595, 780 ]
0912.1396
1
Decision making in the presence of randomness is an important problem, particularly in finance. Often, decision makers base their choices on the values of `risk measures' or `nonlinear expectations'; it is important to understand how these decisions evolve through time. In this paper, we consider how these decisions are affected by the use of a moving horizon, and the possible inconsistencies that this creates. By giving a formal treatment of time consistency without Bellman's equations, we show that there is a new sense in which these decisions can be seen as consistent.
We consider portfolio selection when decisions based on a dynamic risk measure are affected by the use of a moving horizon, and the possible inconsistencies that this creates. By giving a formal treatment of time consistency which is independent of Bellman's equations, we show that there is a new sense in which these decisions can be seen as consistent.
[ { "type": "R", "before": "Decision making in the presence of randomness is an important problem, particularly in finance. Often, decision makers base their choices on the values of `risk measures' or `nonlinear expectations'; it is important to understand how these decisions evolve through time. In this paper, we consider how these decisions", "after": "We consider portfolio selection when decisions based on a dynamic risk measure", "start_char_pos": 0, "end_char_pos": 317 }, { "type": "R", "before": "without", "after": "which is independent of", "start_char_pos": 464, "end_char_pos": 471 } ]
[ 0, 95, 199, 270, 414 ]
0912.1548
1
Apoptosis is a highly regulated cell death mechanism involved in many physiological processes. One of the key components of extrinsically activated apoptosis is the death receptor Fas, which, on binding to its cognate ligand , oligomerize to form the death-inducing signaling complex , a pivotal trigger of apoptosis . Motivated by recent experimental data demonstrating the capacity of Fas to self-stabilize in their signaling forms , we propose a mathematical model of death ligand-receptor interaction that exhibits hysteresis. This provides an upstream mechanism for bistability in apoptosis, which is seen to be a consequence of biologically observed receptor trimerization. We analyze the bistabilitythresholds of the model, which furthermore possesses robustness of bistability , and provide a model assessment criterion using tools from algebraic geometry . Our results strongly suggest a role for Fas and other death receptors in generating robust threshold switching between coherent life and death states. Discussion includes an analogy with ferromagnetism and the generalization of self-stabilization to other apoptotic complexes such as the apoptosome .
Apoptosis is a highly regulated cell death mechanism involved in many physiological processes. A key component of extrinsically activated apoptosis is the death receptor Fas, which, on binding to its cognate ligand FasL , oligomerize to form the death-inducing signaling complex . Motivated by recent experimental data , we propose a mathematical model of death ligand-receptor dynamics where FasL acts as a clustering agent for Fas, which form locally stable signaling platforms through proximity-induced receptor interactions. Significantly, the model exhibits hysteresis, providing an upstream mechanism for bistability and robustness. At low receptor concentrations, the bistability is contingent on the trimerism of FasL. Moreover, irreversible bistability , representing a committed cell death decision, emerges at high concentrations, which may be achieved through receptor pre-association or localization onto membrane lipid rafts. Thus, our model provides a novel theory for these observed biological phenomena within the unified context of bistability. Importantly, as Fas interactions initiate the extrinsic apoptotic pathway, our model also suggests a mechanism by which cells may function as bistable life/death switches independently of any such dynamics in their downstream components . Our results highlight the role of death receptors in deciding cell fate and add to the signal processing capabilities attributed to receptor clustering .
[ { "type": "R", "before": "One of the key components", "after": "A key component", "start_char_pos": 95, "end_char_pos": 120 }, { "type": "A", "before": null, "after": "FasL", "start_char_pos": 225, "end_char_pos": 225 }, { "type": "D", "before": ", a pivotal trigger of apoptosis", "after": null, "start_char_pos": 285, "end_char_pos": 317 }, { "type": "D", "before": "demonstrating the capacity of Fas to self-stabilize in their signaling forms", "after": null, "start_char_pos": 358, "end_char_pos": 434 }, { "type": "R", "before": "interaction that exhibits hysteresis. This provides", "after": "dynamics where FasL acts as a clustering agent for Fas, which form locally stable signaling platforms through proximity-induced receptor interactions. Significantly, the model exhibits hysteresis, providing", "start_char_pos": 494, "end_char_pos": 545 }, { "type": "R", "before": "in apoptosis, which is seen to be a consequence of biologically observed receptor trimerization. We analyze the bistabilitythresholds of the model, which furthermore possesses robustness of bistability", "after": "and robustness. At low receptor concentrations, the bistability is contingent on the trimerism of FasL. Moreover, irreversible bistability", "start_char_pos": 584, "end_char_pos": 785 }, { "type": "R", "before": "and provide a model assessment criterion using tools from algebraic geometry", "after": "representing a committed cell death decision, emerges at high concentrations, which may be achieved through receptor pre-association or localization onto membrane lipid rafts. Thus, our model provides a novel theory for these observed biological phenomena within the unified context of bistability. Importantly, as Fas interactions initiate the extrinsic apoptotic pathway, our model also suggests a mechanism by which cells may function as bistable life/death switches independently of any such dynamics in their downstream components", "start_char_pos": 788, "end_char_pos": 864 }, { "type": "R", "before": "strongly suggest a role for Fas and other", "after": "highlight the role of", "start_char_pos": 879, "end_char_pos": 920 }, { "type": "R", "before": "generating robust threshold switching between coherent life and death states. Discussion includes an analogy with ferromagnetism and the generalization of self-stabilization to other apoptotic complexes such as the apoptosome", "after": "deciding cell fate and add to the signal processing capabilities attributed to receptor clustering", "start_char_pos": 940, "end_char_pos": 1165 } ]
[ 0, 94, 319, 531, 680, 866, 1017 ]
0912.1985
1
The fluctuation-dissipation theory is invoked to shed light on input-output industrial correlations at a macroscopic level ; it is applied to the IIP (indices of industrial production) data in Japan. The random matrix theory is adopted to eliminate statistical noises out of the original data. The genuine correlation matrixfor the IIP thus obtained elucidates interindustry correlations in a statistically meaningful way . Our previous paper has demonstrated that the noise elimination procedure was successful in identifying existence of intrinsic business cycles. Here our particular attention is given to relationship between final demand and intermediate production . Also we observe distinctive external stimuli on the Japanese economy exerted by the current global economic crisis. Those stimuli are derived from residual of moving-averaged fluctuations of the IIP left over after subtracting the long-period components due to the business cycles .
In this study, the fluctuation-dissipation theory is invoked to shed light on input-output interindustrial relations at a macroscopic level by its application to IIP (indices of industrial production) data for Japan. Statistical noise arising from finiteness of the time series data is carefully removed by making use of the random matrix theory in an eigenvalue analysis of the correlation matrix; as a result, two dominant eigenmodes are detected . Our previous study successfully used these two modes to demonstrate the existence of intrinsic business cycles. Here a correlation matrix constructed from the two modes describes genuine interindustrial correlations in a statistically meaningful way. Further it enables us to quantitatively discuss the relationship between shipments of final demand goods and production of intermediate goods in a linear response framework. We also investigate distinctive external stimuli for the Japanese economy exerted by the current global economic crisis. These stimuli are derived from residuals of moving average fluctuations of the IIP remaining after subtracting the long-period components arising from inherent business cycles. The observation reveals that the fluctuation-dissipation theory is applicable to an economic system that is supposed to be far from physical equilibrium .
[ { "type": "R", "before": "The", "after": "In this study, the", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "R", "before": "industrial correlations", "after": "interindustrial relations", "start_char_pos": 76, "end_char_pos": 99 }, { "type": "R", "before": "; it is applied to the", "after": "by its application to", "start_char_pos": 123, "end_char_pos": 145 }, { "type": "R", "before": "in Japan. The", "after": "for Japan. Statistical noise arising from finiteness of the time series data is carefully removed by making use of the", "start_char_pos": 190, "end_char_pos": 203 }, { "type": "R", "before": "is adopted to eliminate statistical noises out of the original data. The genuine correlation matrixfor the IIP thus obtained elucidates interindustry correlations in a statistically meaningful way", "after": "in an eigenvalue analysis of the correlation matrix; as a result, two dominant eigenmodes are detected", "start_char_pos": 225, "end_char_pos": 421 }, { "type": "R", "before": "paper has demonstrated that the noise elimination procedure was successful in identifying", "after": "study successfully used these two modes to demonstrate the", "start_char_pos": 437, "end_char_pos": 526 }, { "type": "R", "before": "our particular attention is given to relationship between final demand and intermediate production . Also we observe", "after": "a correlation matrix constructed from the two modes describes genuine interindustrial correlations in a statistically meaningful way. Further it enables us to quantitatively discuss the relationship between shipments of final demand goods and production of intermediate goods in a linear response framework. We also investigate", "start_char_pos": 572, "end_char_pos": 688 }, { "type": "R", "before": "on", "after": "for", "start_char_pos": 718, "end_char_pos": 720 }, { "type": "R", "before": "Those", "after": "These", "start_char_pos": 789, "end_char_pos": 794 }, { "type": "R", "before": "residual of moving-averaged", "after": "residuals of moving average", "start_char_pos": 820, "end_char_pos": 847 }, { "type": "R", "before": "left over", "after": "remaining", "start_char_pos": 872, "end_char_pos": 881 }, { "type": "R", "before": "due to the business cycles", "after": "arising from inherent business cycles. The observation reveals that the fluctuation-dissipation theory is applicable to an economic system that is supposed to be far from physical equilibrium", "start_char_pos": 927, "end_char_pos": 953 } ]
[ 0, 124, 199, 293, 423, 566, 672, 788 ]
0912.2595
1
In equity and foreign exchange markets the risk-neutral dynamics of the underlying asset are commonly represented by a stochastic volatility model with jumps. In this paper we consider a dense subclass of such models and develop analytically tractable formulae for the prices of a range of first-generation exotic derivatives. We provide closed form formulae for the Fourier transforms of vanilla and forward starting options as well as the formula for the slope of the implied volatility smile for large strikes. A simple explicit approximation formula for the variance swap price is given. The prices of the volatility swaps and other volatility derivatives are given as a one-dimensional integral of an explicit function. Analytically tractable formulae for the Laplace transform (in maturity) of the double-no-touch options and the Fourier-Laplace transform (in strike and maturity) of the double knock-out call and put options are obtained. The proof of the latter results is based on a novel complex matrix Wiener-Hopf factorization .
In equity and foreign exchange markets the risk-neutral dynamics of the underlying asset are commonly represented by stochastic volatility models with jumps. In this paper we consider a dense subclass of such models and develop analytically tractable formulae for the prices of a range of first-generation exotic derivatives. We provide closed form formulae for the Fourier transforms of vanilla and forward starting option prices as well as a formula for the slope of the implied volatility smile for large strikes. A simple explicit approximation formula for the variance swap price is given. The prices of volatility swaps and other volatility derivatives are given as a one-dimensional integral of an explicit function. Analytically tractable formulae for the Laplace transform (in maturity) of the double-no-touch options and the Fourier-Laplace transform (in strike and maturity) of the double knock-out call and put options are obtained. The proof of the latter formulae is based on extended matrix Wiener-Hopf factorisation results. We also provide convergence results .
[ { "type": "R", "before": "a stochastic volatility model", "after": "stochastic volatility models", "start_char_pos": 117, "end_char_pos": 146 }, { "type": "R", "before": "options", "after": "option prices", "start_char_pos": 418, "end_char_pos": 425 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 437, "end_char_pos": 440 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 606, "end_char_pos": 609 }, { "type": "R", "before": "results", "after": "formulae", "start_char_pos": 970, "end_char_pos": 977 }, { "type": "R", "before": "a novel complex", "after": "extended", "start_char_pos": 990, "end_char_pos": 1005 }, { "type": "R", "before": "factorization", "after": "factorisation results. We also provide convergence results", "start_char_pos": 1025, "end_char_pos": 1038 } ]
[ 0, 158, 326, 513, 591, 724, 945 ]
0912.3595
1
The problem of DNA-DNA interaction mediated by divalent counterions is studied using computer simulation. We show that if DNA configurational entropy is restricted, divalent counterions can cause DNA reentrant condensation similar to that caused by tri- or tetra-valent counterions. DNA-DNA interaction is strongly repulsive at small or large counterion concentration and is negligible or slightly attractive for a concentration in between. The result agrees well with various known expermental setups where DNA molecules are strongly constrained (no configurational entropy) .
The problem of DNA-DNA interaction mediated by divalent counterions is studied using computer simulation. Although divalent counterions cannot condense free DNA molecules in solution, we show that if DNA configurational entropy is restricted, divalent counterions can cause DNA reentrant condensation similar to that caused by tri- or tetra-valent counterions. DNA-DNA interaction is strongly repulsive at small or large counterion concentration and is negligible or slightly attractive for a concentration in between. Implications of our results to experiments of DNA ejection from bacteriophages are discussed. The quantitative result serves to understand electrostatic effects in other experiments involving DNA and divalent counterions .
[ { "type": "R", "before": "We", "after": "Although divalent counterions cannot condense free DNA molecules in solution, we", "start_char_pos": 106, "end_char_pos": 108 }, { "type": "R", "before": "The result agrees well with various known expermental setups where DNA molecules are strongly constrained (no configurational entropy)", "after": "Implications of our results to experiments of DNA ejection from bacteriophages are discussed. The quantitative result serves to understand electrostatic effects in other experiments involving DNA and divalent counterions", "start_char_pos": 441, "end_char_pos": 575 } ]
[ 0, 105, 282, 440 ]
0912.4312
1
We build a general model for pricing defaultable claims. In addition to the usual absence of arbitrage assumption, we assume that one defaultable asset (at least) looses value when the default occurs. We prove that under this assumption, in some standard market filtrations, default times are totally inaccessible stopping times; we therefore proceed to a systematic construction of default times with particular emphasis on totally inaccessible stopping times. Surprisingly, this abstract mathematical construction, reveals a very specific and useful way in which default models can be built, using both market factors and idiosyncratic factors. We then provide all the relevant characteristics of a default time (i.e. the Az\'ema supermartingale and its Doob-Meyer decomposition) given the information about these factors. We also provide explicit formulas for the prices of defaultable claims and analyze the risk premiums that form in the market in anticipation of losses which occur at the default event. The usual reduced-form framework is extended in order to include possible economic shocks, in particular jumps in the recoveries at the default time. This formulas are not classic and we point out that the knowledge of the default compensator or the intensity process is not anymore a sufficient quantity for finding explicit prices, but we need indeed the Az\'ema supermartingale and its Doob-Meyer decomposition.
We build a general model for pricing defaultable claims. In addition to the usual absence of arbitrage assumption, we assume that one defaultable asset (at least) looses value when the default occurs. We prove that under this assumption, in some standard market filtrations, default times are totally inaccessible stopping times; we therefore proceed to a systematic construction of default times with particular emphasis on totally inaccessible stopping times. Surprisingly, this abstract mathematical construction, reveals a very specific and useful way in which default models can be built, using both market factors and idiosyncratic factors. We then provide all the relevant characteristics of a default time (i.e. the Az\'ema supermartingale and its Doob-Meyer decomposition) given the information about these factors. We also provide explicit formulas for the prices of defaultable claims and analyze the risk premiums that form in the market in anticipation of losses which occur at the default event. The usual reduced-form framework is extended in order to include possible economic shocks, in particular jumps of the recovery process at the default time. This formulas are not classic and we point out that the knowledge of the default compensator or the intensity process is not anymore a sufficient quantity for finding explicit prices, but we need indeed the Az\'ema supermartingale and its Doob-Meyer decomposition.
[ { "type": "R", "before": "in the recoveries", "after": "of the recovery process", "start_char_pos": 1121, "end_char_pos": 1138 } ]
[ 0, 56, 200, 329, 461, 646, 824, 1009, 1159 ]
0912.4465
1
Post-translational modifications of the histone proteins in the chromosomes are an important factor in epigenetic control as specific regions of DNA in the chromatin are expressed , depending on the particular modification states of the histone proteins. We study the stochastic dynamics of histone protein states, taking into account a non-local feedback mechanism where modified nucleosomes recruit enzymes that diffuse to nearby nucleosomes. We formulate the master equation as a quantum many-body variational problem and employ a Hartree ansatz to obtain a system of coupled nonlinear difference equations . Multiple stable histone states appear in a parameter regime whose size increases with increasing number of modification sites , and multistability is possible even if the non-local feedback term is weak compared to local processes. Increasing the number of independent modification sites exponentially increases the number of stable histone states. We discuss the role of the spatial dependence due to the non-local feedback mechanism , and we consider the effects of spatially heterogeneous enzymatic activity .
Post-translational modifications of histone proteins are an important factor in epigenetic control that serve to regulate transcription , depending on the particular modification states of the histone proteins. We study the stochastic dynamics of histone protein states, taking into account a feedback mechanism where modified nucleosomes recruit enzymes that diffuse to adjacent nucleosomes. We map the system onto a quantum spin system whose dynamics is generated by a non-Hermitian Hamiltonian. Making an ansatz for the solution as a tensor product state leads to nonlinear partial differential equations that describe the dynamics of the system . Multiple stable histone states appear in a parameter regime whose size increases with increasing number of modification sites . We discuss the role of the spatial dependance , and we consider the effects of spatially heterogeneous enzymatic activity . Finally, we consider multistability in a model of several types of correlated post-translational modifications .
[ { "type": "R", "before": "the histone proteins in the chromosomes", "after": "histone proteins", "start_char_pos": 36, "end_char_pos": 75 }, { "type": "R", "before": "as specific regions of DNA in the chromatin are expressed", "after": "that serve to regulate transcription", "start_char_pos": 122, "end_char_pos": 179 }, { "type": "D", "before": "non-local", "after": null, "start_char_pos": 337, "end_char_pos": 346 }, { "type": "R", "before": "nearby", "after": "adjacent", "start_char_pos": 425, "end_char_pos": 431 }, { "type": "R", "before": "formulate the master equation as a quantum many-body variational problem and employ a Hartree ansatz to obtain a system of coupled nonlinear difference equations", "after": "map the system onto a quantum spin system whose dynamics is generated by a non-Hermitian Hamiltonian. Making an ansatz for the solution as a tensor product state leads to nonlinear partial differential equations that describe the dynamics of the system", "start_char_pos": 448, "end_char_pos": 609 }, { "type": "R", "before": ", and multistability is possible even if the non-local feedback term is weak compared to local processes. Increasing the number of independent modification sites exponentially increases the number of stable histone states.", "after": ".", "start_char_pos": 738, "end_char_pos": 960 }, { "type": "R", "before": "dependence due to the non-local feedback mechanism", "after": "dependance", "start_char_pos": 996, "end_char_pos": 1046 }, { "type": "A", "before": null, "after": ". Finally, we consider multistability in a model of several types of correlated post-translational modifications", "start_char_pos": 1123, "end_char_pos": 1123 } ]
[ 0, 254, 444, 611, 843, 960 ]
0912.4533
1
In the paper "On Truncated Variation of Brownian Motion with Drift" (Bull. Pol. Acad. Sci. Math. 56 (2008), no.4, 267 - 281) we defined truncated variation of Brownian motion with drift, W_t = B_t + \mu t, t\geq 0, where (B_t) is a standard Brownian motion. For positive c we define two related quantities - upward truncated variation UTV^c_{\mu[a,b]=\sup_n a \leq t_1 < s_1 < ... < t_n < s_n \leq b i=1^{n} \max }%DIFDELCMD < {%%% W_{s_i and , analogously, downward truncated variation DTV^c_{\mu[a,b]=\sup_n a \leq t_1 < s_1 < ... < t_n < s_n \leq b i=1^{n} \max }%DIFDELCMD < {%%% W_{t_i We prove that exponential moments of the above quantities are finite (in opposite to the regular variation, corresponding to TV^0, which is infinite almost surely). We present estimates of the expected value of \% UTV^c up to universal constants. As an application we give some estimates of the maximal possible gain from trading a financial asset in the presence of flat commission (proportional to the value of the transaction) when the dynamics of the prices of the asset follows a geometric Browniam motion process. In the presented estimates upward truncated variation appears naturally .
In the paper "On Truncated Variation of Brownian Motion with Drift" (Bull. Pol. Acad. Sci. Math. 56 (2008), no.4, 267 - 281) we defined truncated variation of Brownian motion with drift, W_t = B_t + \mu t, t\geq 0, where (B_t) is a standard Brownian motion. Truncated variation differs from regular variation by neglecting jumps smaller than some fixed c > 0. We prove that truncated variation is a random variable with finite moment-generating function for any complex argument. We also define two closely related quantities - upward truncated variation [a,b]=\sup_n a \leq t_1 < s_1 < ... < t_n < s_n \leq b i=1^{n} \max }%DIFDELCMD < {%%% and downward truncated variation [a,b]=\sup_n a \leq t_1 < s_1 < ... < t_n < s_n \leq b i=1^{n} \max }%DIFDELCMD < {%%% . The defined quantities may have some interpretation in financial mathematics. Exponential moment of upward truncated variation may be interpreted as the maximal possible return from trading a financial asset in the presence of flat commission when the dynamics of the prices of the asset follows a geometric Brownian motion process. We calculate the Laplace transform with respect to time parameter of the moment-generating functions of the upward and downward truncated variations. As an application of the obtained formula we give an exact formula for expected value of upward and downward truncated variations. We give also exact (up to universal constants) estimates of the expected values of the mentioned quantities .
[ { "type": "R", "before": "For positive c we define two", "after": "Truncated variation differs from regular variation by neglecting jumps smaller than some fixed c > 0. We prove that truncated variation is a random variable with finite moment-generating function for any complex argument. We also define two closely", "start_char_pos": 258, "end_char_pos": 286 }, { "type": "D", "before": "UTV^c_{\\mu", "after": null, "start_char_pos": 335, "end_char_pos": 345 }, { "type": "D", "before": "W_{s_i", "after": null, "start_char_pos": 432, "end_char_pos": 438 }, { "type": "R", "before": "and , analogously,", "after": "and", "start_char_pos": 439, "end_char_pos": 457 }, { "type": "D", "before": "DTV^c_{\\mu", "after": null, "start_char_pos": 487, "end_char_pos": 497 }, { "type": "D", "before": "W_{t_i", "after": null, "start_char_pos": 584, "end_char_pos": 590 }, { "type": "R", "before": "We prove that exponential moments of the above quantities are finite (in opposite to the regular variation, corresponding to TV^0, which is infinite almost surely). We present estimates of the expected value of \\% UTV^c up to universal constants. As an application we give some estimates of the maximal possible gain", "after": ". The defined quantities may have some interpretation in financial mathematics. Exponential moment of upward truncated variation may be interpreted as the maximal possible return", "start_char_pos": 591, "end_char_pos": 907 }, { "type": "D", "before": "(proportional to the value of the transaction)", "after": null, "start_char_pos": 974, "end_char_pos": 1020 }, { "type": "R", "before": "Browniam", "after": "Brownian", "start_char_pos": 1086, "end_char_pos": 1094 }, { "type": "R", "before": "In the presented estimates upward truncated variation appears naturally", "after": "We calculate the Laplace transform with respect to time parameter of the moment-generating functions of the upward and downward truncated variations. As an application of the obtained formula we give an exact formula for expected value of upward and downward truncated variations. We give also exact (up to universal constants) estimates of the expected values of the mentioned quantities", "start_char_pos": 1111, "end_char_pos": 1182 } ]
[ 0, 79, 257, 380, 532, 755, 837, 1110 ]
0912.4710
1
Biochemical computing is an emerging field of unconventional computing that attempts to process information with biomolecules and biological objects using Boolean logic. In this work electrode-immobilized glucose-6-phosphate dehydrogenase enzyme catalyzed a reaction which carries out the Boolean AND logic gate. We report the first experimental realization of a sigmoid shape response in one of the gate inputswhich is a desirable shape for biocomputing application as it allows reduction of the analog noise . A kinetic model is also developed and used to evaluate the extent to which the experimentally realized gate is close to optimal.
Biochemical computing is an emerging field of unconventional computing that attempts to process information with biomolecules and biological objects using digital logic. In this work we survey filtering in general, in biochemical computing, and summarize the experimental realization of an AND logic gate with sigmoid response in one of the inputs. The logic gate is realized with electrode-immobilized glucose-6-phosphate dehydrogenase enzyme that catalyzes a reaction corresponding to the Boolean AND functions . A kinetic model is also developed and used to evaluate the extent to which the performance of the experimentally realized logic gate is close to optimal.
[ { "type": "R", "before": "Boolean", "after": "digital", "start_char_pos": 155, "end_char_pos": 162 }, { "type": "R", "before": "electrode-immobilized glucose-6-phosphate dehydrogenase enzyme catalyzed a reaction which carries out the Boolean AND logic gate. We report the first", "after": "we survey filtering in general, in biochemical computing, and summarize the", "start_char_pos": 183, "end_char_pos": 332 }, { "type": "R", "before": "a sigmoid shape", "after": "an AND logic gate with sigmoid", "start_char_pos": 361, "end_char_pos": 376 }, { "type": "R", "before": "gate inputswhich is a desirable shape for biocomputing application as it allows reduction of the analog noise", "after": "inputs. The logic gate is realized with electrode-immobilized glucose-6-phosphate dehydrogenase enzyme that catalyzes a reaction corresponding to the Boolean AND functions", "start_char_pos": 400, "end_char_pos": 509 }, { "type": "R", "before": "experimentally realized", "after": "performance of the experimentally realized logic", "start_char_pos": 591, "end_char_pos": 614 } ]
[ 0, 169, 312, 511 ]
0912.4723
1
Despite the availability of very detailed data on financial market, agent-based modeling is hindered by the lack of information about real-trader behavior. This makes it impossible to validate agent-based models, which are thus reverse-engineering attempts. This work is a contribution to the building of a set of stylized facts about the traders themselves. Using the client database of Swissquote Bank SA, we find that the transaction cost structure determines on average to a large extend the relationship between the mean turnoverper transaction of an investor and his mean wealth. A simple extension of CAPM that includes variable transaction costs is able to reproduce qualitatively the observed behaviors. We argue that this shows the collective ability of a population to construct a mean-variance portfolio that takes into account transaction costs.
Despite the availability of very detailed data on financial market, agent-based modeling is hindered by the lack of information about real trader behavior. This makes it impossible to validate agent-based models, which are thus reverse-engineering attempts. This work is a contribution to the building of a set of stylized facts about the traders themselves. Using the client database of Swissquote Bank SA, the largest on-line Swiss broker, we find empirical relationships between turnover, account values and the number of assets in which a trader is invested. A theory based on simple mean-variance portfolio optimization that crucially includes variable transaction costs is able to reproduce faithfully the observed behaviors. We finally argue that our results bring into light the collective ability of a population to construct a mean-variance portfolio that takes into account the structure of transaction costs
[ { "type": "R", "before": "real-trader", "after": "real trader", "start_char_pos": 134, "end_char_pos": 145 }, { "type": "R", "before": "we find that the transaction cost structure determines on average to a large extend the relationship between the mean turnoverper transaction of an investor and his mean wealth. A simple extension of CAPM that", "after": "the largest on-line Swiss broker, we find empirical relationships between turnover, account values and the number of assets in which a trader is invested. A theory based on simple mean-variance portfolio optimization that crucially", "start_char_pos": 408, "end_char_pos": 617 }, { "type": "R", "before": "qualitatively", "after": "faithfully", "start_char_pos": 675, "end_char_pos": 688 }, { "type": "R", "before": "argue that this shows", "after": "finally argue that our results bring into light", "start_char_pos": 716, "end_char_pos": 737 }, { "type": "R", "before": "transaction costs.", "after": "the structure of transaction costs", "start_char_pos": 840, "end_char_pos": 858 } ]
[ 0, 155, 257, 358, 585, 712 ]
0912.4898
1
We study probability distributions of money, income, and energy consumption per capita for ensembles of economic agents. Following the principle of entropy maximization for partitioning of a limited resource , we find exponential distributions for the investigated variables. We also discuss fluxes of money and population between two systems with different money temperatures. For income distribution, we study a stochastic process with additive and multiplicative components . The resultant income distribution interpolates between exponential at the low end and power-law at the high end, in agreement with the empirical data for USA. We discuss how the increase of income inequality in USA in 1983-2007 results from dramatic increase of the income fraction going to the upper tail and exceeding 20\% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world is reasonably well described by the exponential function. Comparing the data for 1990, 2000, and 2005, we discuss the effects of globalization on the inequality of energy consumption.
Probability distributions of money, income, and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced . The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for USA. We show that the increase of income inequality in USA originates primarily from the increase of the income fraction going to the upper tail , which now exceeds 20\% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000, and 2005, we discuss the effect of globalization on the inequality of energy consumption.
[ { "type": "R", "before": "We study probability", "after": "Probability", "start_char_pos": 0, "end_char_pos": 20 }, { "type": "A", "before": null, "after": "are studied", "start_char_pos": 87, "end_char_pos": 87 }, { "type": "R", "before": "Following the", "after": "The", "start_char_pos": 122, "end_char_pos": 135 }, { "type": "R", "before": ", we find", "after": "gives", "start_char_pos": 209, "end_char_pos": 218 }, { "type": "R", "before": "We also discuss fluxes of money and population between two systems with different money temperatures. For", "after": "A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe", "start_char_pos": 277, "end_char_pos": 382 }, { "type": "D", "before": "we study", "after": null, "start_char_pos": 404, "end_char_pos": 412 }, { "type": "A", "before": null, "after": "is introduced", "start_char_pos": 478, "end_char_pos": 478 }, { "type": "D", "before": "income", "after": null, "start_char_pos": 495, "end_char_pos": 501 }, { "type": "R", "before": "power-law", "after": "power law", "start_char_pos": 567, "end_char_pos": 576 }, { "type": "R", "before": "discuss how", "after": "show that", "start_char_pos": 643, "end_char_pos": 654 }, { "type": "R", "before": "in 1983-2007 results from dramatic", "after": "originates primarily from the", "start_char_pos": 696, "end_char_pos": 730 }, { "type": "R", "before": "and exceeding", "after": ", which now exceeds", "start_char_pos": 787, "end_char_pos": 800 }, { "type": "R", "before": "is reasonably well", "after": "can be approximately", "start_char_pos": 962, "end_char_pos": 980 }, { "type": "R", "before": "effects", "after": "effect", "start_char_pos": 1080, "end_char_pos": 1087 } ]
[ 0, 121, 276, 378, 480, 639, 826, 1019 ]
1001.1379
1
In this article we extend earlier work on the jump-diffusion risk-sensitive asset management problem by allowing for jumps in both the factor process and the asset prices as well as stochastic volatility and investment constraints. In this case, the HJB equation is a PIDE.By combining viscosity solutions with a change of notation, a policy improvement argument and classical results on parabolic PDEs we prove that the PIDE admits a unique smooth solution. A verification theorem concludes the resolutions of this problem .
This paper considers a portfolio optimization problem in which asset prices are represented by SDEs driven by Brownian motion and a Poisson random measure, with drifts that are functions of an auxiliary diffusion factor process. The criterion, following earlier work by Bielecki, Pliska, Nagai and others, is risk-sensitive optimization (equivalent to maximizing the expected growth rate subject to a constraint on variance.) By using a change of measure technique introduced by Kuroda and Nagai we show that the problem reduces to solving a certain stochastic control problem in the factor process, which has no jumps. The main result of the paper is to show that the risk-sensitive jump diffusion problem can be fully characterized in terms of a parabolic Hamilton-Jacobi-Bellman PDE rather than a PIDE, and that this PDE admits a classical C^{1,2 .
[ { "type": "R", "before": "In this article we extend earlier work on the jump-diffusion", "after": "This paper considers a portfolio optimization problem in which asset prices are represented by SDEs driven by Brownian motion and a Poisson random measure, with drifts that are functions of an auxiliary diffusion factor process. The criterion, following earlier work by Bielecki, Pliska, Nagai and others, is", "start_char_pos": 0, "end_char_pos": 60 }, { "type": "R", "before": "asset management problem by allowing for jumps in both the factor process and the asset prices as well as stochastic volatility and investment constraints. In this case, the HJB equation is", "after": "optimization (equivalent to maximizing the expected growth rate subject to a constraint on variance.) By using", "start_char_pos": 76, "end_char_pos": 265 }, { "type": "D", "before": "PIDE.By combining viscosity solutions with a", "after": null, "start_char_pos": 268, "end_char_pos": 312 }, { "type": "R", "before": "notation, a policy improvement argument and classical results on parabolic PDEs we prove that the PIDE admits a unique smooth solution. A verification theorem concludes the resolutions of this problem", "after": "measure technique introduced by Kuroda and Nagai we show that the problem reduces to solving a certain stochastic control problem in the factor process, which has no jumps. The main result of the paper is to show that the risk-sensitive jump diffusion problem can be fully characterized in terms of a parabolic Hamilton-Jacobi-Bellman PDE rather than a PIDE, and that this PDE admits a classical C^{1,2", "start_char_pos": 323, "end_char_pos": 523 } ]
[ 0, 231, 273, 458 ]
1001.1902
1
Recently, hybrid architectures using accelerators like GPGPUs or the Cell processor have gained much interest in the HPC community. The RapidMind Multi-Core Development Platform is a programming environment that allows generating code which is able to seamlessly run on hardware accelerators like GPUs or the Cell processor and multicore CPUs both from AMD and Intel. This paper describes the ports of three mathematical kernels to RapidMind which are chosen as synthetic benchmarks and representatives of scientific codes. Performance of these kernels has been measured on various RapidMind backends (cuda, cell and x86) and compared to other - hardware-specific - implementations (using CUDA, Cell SDK and Intel MKL). The results give an insight in the degree of portability of RapidMind code and code performance across different architectures.
Recently, hybrid architectures using accelerators like GPGPUs or the Cell processor have gained much interest in the HPC community. The RapidMind Multi-Core Development Platform is a programming environment that allows generating code which is able to seamlessly run on hardware accelerators like GPUs or the Cell processor and multicore CPUs both from AMD and Intel. This paper describes the ports of three mathematical kernels to RapidMind which are chosen as synthetic benchmarks and representatives of scientific codes. Performance of these kernels has been measured on various RapidMind backends (cuda, cell and x86) and compared to other hardware-specific implementations (using CUDA, Cell SDK and Intel MKL). The results give an insight in the degree of portability of RapidMind code and code performance across different architectures.
[ { "type": "D", "before": "-", "after": null, "start_char_pos": 644, "end_char_pos": 645 }, { "type": "D", "before": "-", "after": null, "start_char_pos": 664, "end_char_pos": 665 } ]
[ 0, 131, 367, 523, 719 ]
1001.1944
1
The stability properties of two different classes of metabolic cycles are investigated using a combination of analytical and computational techniques . Using principles from structural kinetic modeling (SKM), it is shown that the stability of well-ordered metabolic networks can be studied using exclusively analytical techniques. The guaranteed stability of a class of single input, single output metabolic cycles is established . Next, parameter regimes for the stability of a small autocatalytic cycle are determined. It is demonstrated that analytical methods can be used to understand the relationship between kinetic parameters and stability, and that results from these analytical methods can be confirmed with computational experiments. Results suggest that elevated metabolite concentrations and certain crucial saturation parameters can strongly affect the stability of the entire metabolic cycle. These conclusions support the hypothesis that certain types of metabolic cycles may have played a role in the development of primitive metabolism despite the absence of regulatory machinery. Furthermore, the results suggest that the role of allosteric control mechanismsin biochemical networks may be greater than simply stabilizing the network .
We investigate the stability properties of two different classes of metabolic cycles using a combination of analytical and computational methods . Using principles from structural kinetic modeling (SKM), we show that the stability of metabolic networks with certain structural regularities can be studied using exclusively analytical techniques. We then apply these technique to a class of single input, single output metabolic cycles , and find that stability is guaranteed under a wide range of conditions . Next, we extend our analysis to a small autocatalytic cycle , and determine parameter regimes within which the cycle is very likely to be stable. We demonstrate that analytical methods can be used to understand the relationship between kinetic parameters and stability, and that results from these analytical methods can be confirmed with computational experiments. In addition, our results suggest that elevated metabolite concentrations and certain crucial saturation parameters can strongly affect the stability of the entire metabolic cycle. We discuss our results in light of the possibility that evolutionary forces may select for metabolic network topologies with a high intrinsic probability of being stable. Furthermore, our conclusions support the hypothesis that certain types of metabolic cycles may have played a role in the development of primitive metabolism despite the absence of regulatory mechanisms .
[ { "type": "R", "before": "The", "after": "We investigate the", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "D", "before": "are investigated", "after": null, "start_char_pos": 70, "end_char_pos": 86 }, { "type": "R", "before": "techniques", "after": "methods", "start_char_pos": 139, "end_char_pos": 149 }, { "type": "R", "before": "it is shown", "after": "we show", "start_char_pos": 209, "end_char_pos": 220 }, { "type": "R", "before": "well-ordered metabolic networks", "after": "metabolic networks with certain structural regularities", "start_char_pos": 243, "end_char_pos": 274 }, { "type": "R", "before": "The guaranteed stability of", "after": "We then apply these technique to", "start_char_pos": 331, "end_char_pos": 358 }, { "type": "R", "before": "is established", "after": ", and find that stability is guaranteed under a wide range of conditions", "start_char_pos": 415, "end_char_pos": 429 }, { "type": "R", "before": "parameter regimes for the stability of", "after": "we extend our analysis to", "start_char_pos": 438, "end_char_pos": 476 }, { "type": "R", "before": "are determined. It is demonstrated", "after": ", and determine parameter regimes within which the cycle is very likely to be stable. We demonstrate", "start_char_pos": 505, "end_char_pos": 539 }, { "type": "R", "before": "Results", "after": "In addition, our results", "start_char_pos": 745, "end_char_pos": 752 }, { "type": "R", "before": "These", "after": "We discuss our results in light of the possibility that evolutionary forces may select for metabolic network topologies with a high intrinsic probability of being stable. Furthermore, our", "start_char_pos": 908, "end_char_pos": 913 }, { "type": "R", "before": "machinery. Furthermore, the results suggest that the role of allosteric control mechanismsin biochemical networks may be greater than simply stabilizing the network", "after": "mechanisms", "start_char_pos": 1088, "end_char_pos": 1252 } ]
[ 0, 330, 520, 744, 907, 1098 ]
1001.1944
2
We investigate the stability properties of two different classes of metabolic cycles using a combination of analytical and computational methods. Using principles from structural kinetic modeling (SKM), we show that the stability of metabolic networks with certain structural regularities can be studied using exclusively analytical techniques. We then apply these technique to a class of single input, single output metabolic cycles, and find that stability is guaranteed under a wide range of conditions . Next, we extend our analysis to a small autocatalytic cycle, and determine parameter regimes within which the cycle is very likely to be stable. We demonstrate that analytical methods can be used to understand the relationship between kinetic parameters and stability, and that results from these analytical methods can be confirmed with computational experiments. In addition, our results suggest that elevated metabolite concentrations and certain crucial saturation parameters can strongly affect the stability of the entire metabolic cycle. We discuss our results in light of the possibility that evolutionary forces may select for metabolic network topologies with a high intrinsic probability of being stable. Furthermore, our conclusions support the hypothesis that certain types of metabolic cycles may have played a role in the development of primitive metabolism despite the absence of regulatory mechanisms.
We investigate the stability properties of two different classes of metabolic cycles using a combination of analytical and computational methods. Using principles from structural kinetic modeling (SKM), we show that the stability of metabolic networks with certain structural regularities can be studied using a combination of analytical and computational techniques. We then apply these techniques to a class of single input, single output metabolic cycles, and find that the cycles are stable under all conditions tested . Next, we extend our analysis to a small autocatalytic cycle, and determine parameter regimes within which the cycle is very likely to be stable. We demonstrate that analytical methods can be used to understand the relationship between kinetic parameters and stability, and that results from these analytical methods can be confirmed with computational experiments. In addition, our results suggest that elevated metabolite concentrations and certain crucial saturation parameters can strongly affect the stability of the entire metabolic cycle. We discuss our results in light of the possibility that evolutionary forces may select for metabolic network topologies with a high intrinsic probability of being stable. Furthermore, our conclusions support the hypothesis that certain types of metabolic cycles may have played a role in the development of primitive metabolism despite the absence of regulatory mechanisms.
[ { "type": "R", "before": "exclusively analytical", "after": "a combination of analytical and computational", "start_char_pos": 310, "end_char_pos": 332 }, { "type": "R", "before": "technique", "after": "techniques", "start_char_pos": 365, "end_char_pos": 374 }, { "type": "R", "before": "stability is guaranteed under a wide range of conditions", "after": "the cycles are stable under all conditions tested", "start_char_pos": 449, "end_char_pos": 505 } ]
[ 0, 145, 344, 652, 872, 1052, 1223 ]
1001.2678
1
Consider a frictionless market trading a finite number of co-maturing European call and put options written on a risky asset plus an instrument with path-dependent payoff known as a weighted variance swap, e. g. a vanilla variance swap or a corridor variance swap. The question we ask is: Do the traded prices admit an arbitrage opportunity? We determine necessary and sufficient model-free conditions for the price of a continuously monitored weighted variance swap to be consistent with absence of arbitrage. We discuss in detail the types of arbitrage that may arise when the determined conditions are not satisfied. In particular we find that prices of European call/puts are not enough for the upper bound price of the vanilla variance swap to be finite. We show that given an extra piece of information, namely the price of an additional asset, a finite bound can be explicitly determined .
We develop robust pricing and hedging of a weighted variance swap when market prices for a finite number of co--maturing put options are given. We assume the given prices do not admit arbitrage and deduce no-arbitrage bounds on the weighted variance swap along with super- and sub- replicating strategies which enforce them. We find that market quotes for variance swaps are surprisingly close to the model-free lower bounds we determine. We solve the problem by transforming it into an analogous question for a European option with a convex payoff. The lower bound becomes a problem in semi-infinite linear programming which we solve in detail. The upper bound is explicit. We work in a model-independent and probability-free setup. In particular we use and extend F\"ollmer's pathwise stochastic calculus. Appropriate notions of arbitrage and admissibility are introduced. This allows us to establish the usual hedging relation between the variance swap and the 'log contract' and similar connections for weighted variance swaps. Our results take form of a FTAP: we show that the absence of (weak) arbitrage is equivalent to the existence of a classical model which reproduces the observed prices via risk-neutral expectations of discounted payoffs .
[ { "type": "R", "before": "Consider a frictionless market trading", "after": "We develop robust pricing and hedging of a weighted variance swap when market prices for", "start_char_pos": 0, "end_char_pos": 38 }, { "type": "R", "before": "co-maturing European call and put options written on a risky asset plus an instrument with path-dependent payoff known as a weighted variance swap, e. g. a vanilla variance swap or a corridor variance swap. The question we ask is: Do the traded prices admit an arbitrage opportunity? We determine necessary and sufficient model-free conditions for the price of a continuously monitored", "after": "co--maturing put options are given. We assume the given prices do not admit arbitrage and deduce no-arbitrage bounds on the", "start_char_pos": 58, "end_char_pos": 443 }, { "type": "R", "before": "to be consistent with absence of arbitrage. We discuss in detail the types of arbitrage that may arise when the determined conditions are not satisfied.", "after": "along with super- and sub- replicating strategies which enforce them. We find that market quotes for variance swaps are surprisingly close to the model-free lower bounds we determine. We solve the problem by transforming it into an analogous question for a European option with a convex payoff. The lower bound becomes a problem in semi-infinite linear programming which we solve in detail. The upper bound is explicit. We work in a model-independent and probability-free setup.", "start_char_pos": 467, "end_char_pos": 619 }, { "type": "R", "before": "find that prices of European call/puts are not enough for the upper bound price of the vanilla variance swap to be finite. We show that given an extra piece of information, namely the price of an additional asset, a finite bound can be explicitly determined", "after": "use and extend F\\\"ollmer's pathwise stochastic calculus. Appropriate notions of arbitrage and admissibility are introduced. This allows us to establish the usual hedging relation between the variance swap and the 'log contract' and similar connections for weighted variance swaps. Our results take form of a FTAP: we show that the absence of (weak) arbitrage is equivalent to the existence of a classical model which reproduces the observed prices via risk-neutral expectations of discounted payoffs", "start_char_pos": 637, "end_char_pos": 894 } ]
[ 0, 264, 341, 510, 619, 759 ]
1001.2831
1
We present the quantum model of Bertrand duopoly and study the entanglement behaviour on the profit functions of the firms. Using the concept of optimal response of each firm to the price of the opponent, we found four Nash equilibria for maximally entangled initial state. We have shown that only one point among the four Nash equilibria has valid physical meaning. The very presence of quantum entanglement in the initial state gives payoffs higher to the firms than the classical payoffs at the physically valid point for higher values of substitution parameter .
We present the quantum model of Bertrand duopoly and study the entanglement behavior on the profit functions of the firms. Using the concept of optimal response of each firm to the price of the opponent, we found only one Nash equilibirum point for maximally entangled initial state. The very presence of quantum entanglement in the initial state gives payoffs higher to the firms than the classical payoffs at the Nash equilibrium. As a result the dilemma like situation in the classical game is resolved .
[ { "type": "R", "before": "behaviour", "after": "behavior", "start_char_pos": 76, "end_char_pos": 85 }, { "type": "R", "before": "four Nash equilibria", "after": "only one Nash equilibirum point", "start_char_pos": 214, "end_char_pos": 234 }, { "type": "D", "before": "We have shown that only one point among the four Nash equilibria has valid physical meaning.", "after": null, "start_char_pos": 274, "end_char_pos": 366 }, { "type": "R", "before": "physically valid point for higher values of substitution parameter", "after": "Nash equilibrium. As a result the dilemma like situation in the classical game is resolved", "start_char_pos": 498, "end_char_pos": 564 } ]
[ 0, 123, 273, 366 ]
1001.3355
1
Modern Internet services, such as those at Google, Yahoo!, and Amazon, handle billions of requests per day on clusters of thousands of computers. Because these services operate under strict performance requirements, a statistical understanding of their performance is of great practical interest. Such services are modeled by networks of queues, where one queue models each of the individual computers in the system. A key challenge is that the data is incomplete, because recording detailed information about every request to a heavily used system can require unacceptable overhead. In this paper we develop a Bayesian perspective on queueing models in which the arrival and departure times that are not observed are treated as latent variables. Underlying this viewpoint is the observation that a queueing model defines a deterministic transformation between the data and a set of independent variables called the service times. With this viewpoint in hand, we sample from the posterior distribution over missing data and model parameters using Markov chain Monte Carlo. We evaluate our framework on data from a benchmark Web application. We also present a simple technique for selection among nested queueing models. We are unaware of any previous work that considers inference in networks of queues in the presence of missing data.
Modern Internet services, such as those at Google, Yahoo!, and Amazon, handle billions of requests per day on clusters of thousands of computers. Because these services operate under strict performance requirements, a statistical understanding of their performance is of great practical interest. Such services are modeled by networks of queues, where each queue models one of the computers in the system. A key challenge is that the data are incomplete, because recording detailed information about every request to a heavily used system can require unacceptable overhead. In this paper we develop a Bayesian perspective on queueing models in which the arrival and departure times that are not observed are treated as latent variables. Underlying this viewpoint is the observation that a queueing model defines a deterministic transformation between the data and a set of independent variables called the service times. With this viewpoint in hand, we sample from the posterior distribution over missing data and model parameters using Markov chain Monte Carlo. We evaluate our framework on data from a benchmark Web application. We also present a simple technique for selection among nested queueing models. We are unaware of any previous work that considers inference in networks of queues in the presence of missing data.
[ { "type": "R", "before": "one queue models each of the individual", "after": "each queue models one of the", "start_char_pos": 352, "end_char_pos": 391 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 450, "end_char_pos": 452 } ]
[ 0, 145, 296, 416, 583, 746, 930, 1072, 1140, 1219 ]
1001.4401
1
The level crossing analysis of DAX and oil price time series are given. We determine the average frequency of positive-slope crossings, \alpha^+, where T_{\alpha} =1/\alpha^+ is the average waiting time for observing the level \alpha again. We estimate the probability P(K, \alpha), which provides us the probability of observing K times of the level \alpha with positive slope, in time scale T_{\alpha}. For analyzed time series we found that maximum K is about \approx 6. We show that by using the level crossing analysis one can forecasts the DAX and oil time series (normalized log-returns) with good precision for the levels in the interval -0.1 < \alpha < 0.1 and -0.35 < \alpha < 0.35, respectively .
The level crossing analysis of DAX and oil price time series are given. We determine the average frequency of positive-slope crossings, \alpha^+, where T_{\alpha} =1/\alpha^+ is the average waiting time for observing the level \alpha again. We estimate the probability P(K, \alpha), which provides us the probability of observing K times of the level \alpha with positive slope, in time scale T_{\alpha}. For analyzed time series we found that maximum K is about \approx 6. We show that by using the level crossing analysis one can forecasts the DAX and oil time series (normalized log-returns) with good precision for the levels in the interval -0.5 < \alpha < 0.5 .
[ { "type": "R", "before": "-0.1", "after": "-0.5", "start_char_pos": 646, "end_char_pos": 650 }, { "type": "R", "before": "0.1 and -0.35 < \\alpha < 0.35, respectively", "after": "0.5", "start_char_pos": 662, "end_char_pos": 705 } ]
[ 0, 71, 240, 404, 473 ]
1001.4401
2
The level crossing analysis of DAX and oil price time series are given. We determine the average frequency of positive-slope crossings, \alpha^+, where T_{\alpha} =1/\alpha^+ is the average waiting time for observing the level \alpha again. We estimate the probability P(K, \alpha), which provides us the probability of observing K times of the level \alpha with positive slope, in time scale T_{\alpha}. For analyzed time series we found that maximum K is about \approx 6. We show that by using the level crossing analysis one can forecasts the DAX and oil time series (normalized log-returns ) with good precision for the levels in the interval -0.5 < \alpha < 0.5 .
The level crossing and inverse statistics analysis of DAX and oil price time series are given. We determine the average frequency of positive-slope crossings, \alpha^+, where T_{\alpha} =1/\alpha^+ is the average waiting time for observing the level \alpha again. We estimate the probability P(K, \alpha), which provides us the probability of observing K times of the level \alpha with positive slope, in time scale T_{\alpha}. For analyzed time series we found that maximum K is about 6. We show that by using the level crossing analysis one can estimate how the DAX and oil time series will develop. We carry out same analysis for the increments of DAX and oil price log-returns ,(which is known as inverse statistics) and provide the distribution of waiting times to observe some level for the increments .
[ { "type": "A", "before": null, "after": "and inverse statistics", "start_char_pos": 19, "end_char_pos": 19 }, { "type": "D", "before": "\\approx", "after": null, "start_char_pos": 464, "end_char_pos": 471 }, { "type": "R", "before": "forecasts", "after": "estimate how", "start_char_pos": 533, "end_char_pos": 542 }, { "type": "R", "before": "(normalized", "after": "will develop. We carry out same analysis for the increments of DAX and oil price", "start_char_pos": 571, "end_char_pos": 582 }, { "type": "R", "before": ") with good precision for the levels in the interval -0.5 < \\alpha < 0.5", "after": ",(which is known as inverse statistics) and provide the distribution of waiting times to observe some level for the increments", "start_char_pos": 595, "end_char_pos": 667 } ]
[ 0, 72, 241, 405, 474 ]
1001.5258
1
The development of systemic approaches in biology has put emphasis on identifying genetic modules whose behavior can be modeled accurately so as to gain insight into their structure and function. However most gene circuits in a cell are under control of external signals and thus quantitative agreement between experimental data and a mathematical model is difficult. Circadian biology has been one notable exception: quantitative models of the internal clock that orchestrates biological processes over the 24-hour diurnal cycle have been constructed for a URLanisms, from cyanobacteria to plants and mammals. In most cases, a complex architecture with interlocked feedback loops has been evidenced. Here we present first modeling results for the circadian clock of the green unicellular alga Ostreococcus tauri . Two plant-like clock genes have been shown to play a central role in Ostreococcus clock. We find that their expression time profiles can be accurately reproduced by a minimal model of a two-gene transcriptional feedback loop. Remarkably, best adjustment of data recorded under light/dark alternation is obtained for vanishing coupling between the oscillator and the forcing cycle. This suggests that coupling to light is confined to specific time intervals and has no effect when the oscillator is entrained by the diurnal cycle. This intringuing property may reflect a strategy to minimize the impact of fluctuations in daylight intensity on the core circadian oscillator, a type of perturbation that has been rarely considered when assessing the robustness of circadian clocks.
The development of systemic approaches in biology has put emphasis on identifying genetic modules whose behavior can be modeled accurately so as to gain insight into their structure and function. However most gene circuits in a cell are under control of external signals and thus quantitative agreement between experimental data and a mathematical model is difficult. Circadian biology has been one notable exception: quantitative models of the internal clock that orchestrates biological processes over the 24-hour diurnal cycle have been constructed for a URLanisms, from cyanobacteria to plants and mammals. In most cases, a complex architecture with interlocked feedback loops has been evidenced. Here we present first modeling results for the circadian clock of the green unicellular alga Ostreococcus tauri . Two plant-like clock genes have been shown to play a central role in Ostreococcus clock. We find that their expression time profiles can be accurately reproduced by a minimal model of a two-gene transcriptional feedback loop. Remarkably, best adjustment of data recorded under light/dark alternation is obtained when assuming that the oscillator is not coupled to the diurnal cycle. This suggests that coupling to light is confined to specific time intervals and has no dynamical effect when the oscillator is entrained by the diurnal cycle. This intringuing property may reflect a strategy to minimize the impact of fluctuations in daylight intensity on the core circadian oscillator, a type of perturbation that has been rarely considered when assessing the robustness of circadian clocks.
[ { "type": "R", "before": "Ostreococcus tauri", "after": "Ostreococcus tauri", "start_char_pos": 794, "end_char_pos": 812 }, { "type": "R", "before": "Ostreococcus", "after": "Ostreococcus", "start_char_pos": 884, "end_char_pos": 896 }, { "type": "R", "before": "for vanishing coupling between the oscillator and the forcing", "after": "when assuming that the oscillator is not coupled to the diurnal", "start_char_pos": 1127, "end_char_pos": 1188 }, { "type": "A", "before": null, "after": "dynamical", "start_char_pos": 1283, "end_char_pos": 1283 } ]
[ 0, 195, 367, 610, 700, 814, 903, 1040, 1195, 1345 ]
1002.0377
1
This is a short essay I wrote for an online publication affiliated with the business school at the University of Technology, Sydney. I discuss how the methods used in physics can apply to economics in general and financial markets specifically.
This is a short commentary piece that discusses how the methods used in the natural sciences can apply to economics in general and financial markets specifically.
[ { "type": "R", "before": "essay I wrote for an online publication affiliated with the business school at the University of Technology, Sydney. I discuss", "after": "commentary piece that discusses", "start_char_pos": 16, "end_char_pos": 142 }, { "type": "R", "before": "physics", "after": "the natural sciences", "start_char_pos": 167, "end_char_pos": 174 } ]
[ 0, 132 ]
1002.0668
1
We introduce a method to convert an ensemble of sequences of symbols into a weighted directed network whose nodes are motifs, while the directed links and their weights are defined from statistically significant co-occurences of two motifs in the same sequence. The method is shown to be able to correlate sequences with function in the human proteome database, and might find other useful applications for structure discovery also in computational linguistics, in the analysis of time series, and in the theory of dynamical systems .
We introduce a method to convert an ensemble of sequences of symbols into a weighted directed network whose nodes are motifs, while the directed links and their weights are defined from statistically significant co-occurences of two motifs in the same sequence. The analysis of communities of networks of motifs is shown to be able to correlate sequences with functions in the human proteome database, to detect hot topics from online social dialogs, to characterize trajectories of dynamical systems, and might find other useful applications to process large amount of data in various fields .
[ { "type": "R", "before": "method", "after": "analysis of communities of networks of motifs", "start_char_pos": 266, "end_char_pos": 272 }, { "type": "R", "before": "function", "after": "functions", "start_char_pos": 321, "end_char_pos": 329 }, { "type": "A", "before": null, "after": "to detect hot topics from online social dialogs, to characterize trajectories of dynamical systems,", "start_char_pos": 362, "end_char_pos": 362 }, { "type": "R", "before": "for structure discovery also in computational linguistics, in the analysis of time series, and in the theory of dynamical systems", "after": "to process large amount of data in various fields", "start_char_pos": 404, "end_char_pos": 533 } ]
[ 0, 261 ]
1002.1054
1
Switching and bistability has been recognized as an important feature of dynamical systems originating in Systems Biology. In the majority of cases bistability of the dynamical system is established numerically. However, parameter uncertainty is a predominant issue in Systems Biology : the dynamical systems consist of a large number of states and parameters, while measurement data are often very noisy and data points and repetitions are usually few. Hence techniques allowing the analytic computation of parameter vectors where a given system exhibits bistability are desirable. Here we present a new sufficient condition for a large class of mass action networks. This condition takes the form of linear inequality systems. And it is constructive in the sense that solutions to one of the inequality systems determine state and parameter vectors where the underlying mass action systems -- under explicitly stated genericity conditions -- undergo a saddle-node bifurcation .
Many biochemical processes can successfully be described by dynamical systems allowing some form of switching when, depending on their initial conditions, solutions of the dynamical system end up in different regions of state space (associated with different biochemical functions). Switching is often realized by a bistable system (i.e. a dynamical system allowing two stable steady state solutions) and, in the majority of cases , bistability is established numerically. In our point of view this approach is too restrictive, as, one the one hand, due to predominant parameter uncertainty numerical methods are generally difficult to apply to realistic models originating in Systems Biology . And on the other hand switching already arises with the occurrence of a saddle type steady state (characterized by a Jacobian where exactly one Eigenvalue is positive and the remaining eigenvalues have negative real part). Consequently we derive conditions based on linear inequalities that allow the analytic computation of states and parameters where the Jacobian derived from a mass action network has a defective zero eigenvalue so that -- under certain genericity conditions -- a saddle-node bifurcation occurs. Our conditions are applicable to general mass action networks involving at least one conservation relation, however, they are only sufficient (as infeasibility of linear inequalities does not exclude defective zero eigenvalues) .
[ { "type": "R", "before": "Switching and bistability has been recognized as an important feature of dynamical systems originating in Systems Biology. In", "after": "Many biochemical processes can successfully be described by dynamical systems allowing some form of switching when, depending on their initial conditions, solutions of the dynamical system end up in different regions of state space (associated with different biochemical functions). Switching is often realized by a bistable system (i.e. a dynamical system allowing two stable steady state solutions) and, in", "start_char_pos": 0, "end_char_pos": 125 }, { "type": "R", "before": "bistability of the dynamical system", "after": ", bistability", "start_char_pos": 148, "end_char_pos": 183 }, { "type": "R", "before": "However, parameter uncertainty is a predominant issue", "after": "In our point of view this approach is too restrictive, as, one the one hand, due to predominant parameter uncertainty numerical methods are generally difficult to apply to realistic models originating", "start_char_pos": 212, "end_char_pos": 265 }, { "type": "R", "before": ": the dynamical systems consist of a large number of states and parameters, while measurement data are often very noisy and data points and repetitions are usually few. Hence techniques allowing the analytic computation of parameter vectors where a given system exhibits bistability are desirable. Here we present a new sufficient condition for a large class of mass action networks. This condition takes the form of linear inequality systems. And it is constructive in the sense that solutions to one of the inequality systems determine state and parameter vectors where the underlying mass action systems", "after": ". And on the other hand switching already arises with the occurrence of a saddle type steady state (characterized by a Jacobian where exactly one Eigenvalue is positive and the remaining eigenvalues have negative real part). Consequently we derive conditions based on linear inequalities that allow the analytic computation of states and parameters where the Jacobian derived from a mass action network has a defective zero eigenvalue so that", "start_char_pos": 285, "end_char_pos": 891 }, { "type": "R", "before": "explicitly stated", "after": "certain", "start_char_pos": 901, "end_char_pos": 918 }, { "type": "D", "before": "undergo", "after": null, "start_char_pos": 944, "end_char_pos": 951 }, { "type": "A", "before": null, "after": "occurs. Our conditions are applicable to general mass action networks involving at least one conservation relation, however, they are only sufficient (as infeasibility of linear inequalities does not exclude defective zero eigenvalues)", "start_char_pos": 978, "end_char_pos": 978 } ]
[ 0, 122, 211, 453, 582, 668, 728 ]
1002.1070
1
Inspired by the bankruptcy of Lehman Brothers and its consequences on the global financial system, we develop a simple model in which the Lehman default event is quantified as having an almost immediate effect in worsening the credit worthiness of all financial institutions in the economic network. In our stylized description, all properties of a given firm are captured by its effective credit rating, which follows a simple dynamics of co-evolution with the credit ratings of the other firms in our economic network. The existence of a global phase transition explains the large susceptibility of the system to negative shocks. We show that bailing out the first few defaulting firms does not solve the problem, but does have the effect of alleviating considerably the global shock, as measured by the fraction of firms that are not defaulting as a consequence. This beneficial effect is the counterpart of the large vulnerability of the system of coupled firms, which are both the direct consequences of the collective URLanized endogenous behaviors of the credit ratings of the firms in our economic network.
Inspired by the bankruptcy of Lehman Brothers and its consequences on the global financial system, we develop a simple model in which the Lehman default event is quantified as having an almost immediate effect in worsening the credit worthiness of all financial institutions in the economic network. In our stylized description, all properties of a given firm are captured by its effective credit rating, which follows a simple dynamics of co-evolution with the credit ratings of the other firms in our economic network. The dynamics resembles the evolution of Potts spin-glass with external global field corresponding to a panic effect in the economy. The existence of a global phase transition , between paramagnetic and ferromagnetic phases, explains the large susceptibility of the system to negative shocks. We show that bailing out the first few defaulting firms does not solve the problem, but does have the effect of alleviating considerably the global shock, as measured by the fraction of firms that are not defaulting as a consequence. This beneficial effect is the counterpart of the large vulnerability of the system of coupled firms, which are both the direct consequences of the collective URLanized endogenous behaviors of the credit ratings of the firms in our economic network.
[ { "type": "A", "before": null, "after": "dynamics resembles the evolution of Potts spin-glass with external global field corresponding to a panic effect in the economy. The", "start_char_pos": 525, "end_char_pos": 525 }, { "type": "A", "before": null, "after": ", between paramagnetic and ferromagnetic phases,", "start_char_pos": 565, "end_char_pos": 565 } ]
[ 0, 299, 520, 633, 867 ]
1002.1826
1
The constraints imposed by nano- and microscale confinement on the conformational degrees of freedom of thermally fluctuating biopolymers is utilized in contemporary nano-devices to specifically elongate and manipulate single chains. A thorough theoretical understanding and quantification of the statistical conformations of confined polymer chains is thus a central concern in polymer physics. We present an analytical calculation of the radial distribution function of harmonically confined semiflexible polymers in the weakly bending limit. Special emphasis has been put on a proper treatment of boundary conditions. We show that the particular choice of boundary conditions significantly impacts the chain statistics in cases of weak and intermediate confinement. Comparing our analytical model to numerical data from Monte Carlo simulations we find excellent agreement over a broad range of parameters.
The constraints imposed by nano- and microscale confinement on the conformational degrees of freedom of thermally fluctuating biopolymers are utilized in contemporary nano-devices to specifically elongate and manipulate single chains. A thorough theoretical understanding and quantification of the statistical conformations of confined polymer chains is thus a central concern in polymer physics. We present an analytical calculation of the radial distribution function of harmonically confined semiflexible polymers in the weakly bending limit. Special emphasis has been put on a proper treatment of global modes, i.e. the possibility of the chain to perform global movements within the channel. We show that the effect of these global modes significantly impacts the chain statistics in cases of weak and intermediate confinement. Comparing our analytical model to numerical data from Monte Carlo simulations we find excellent agreement over a broad range of parameters.
[ { "type": "R", "before": "is", "after": "are", "start_char_pos": 138, "end_char_pos": 140 }, { "type": "R", "before": "boundary conditions.", "after": "global modes, i.e. the possibility of the chain to perform global movements within the channel.", "start_char_pos": 600, "end_char_pos": 620 }, { "type": "R", "before": "particular choice of boundary conditions", "after": "effect of these global modes", "start_char_pos": 638, "end_char_pos": 678 } ]
[ 0, 233, 395, 544, 620, 768 ]
1002.2169
1
We emphasize that assuming the ] existence of an elongated, hybridized form of DNA ('S-DNA') does not immediately imply DNA overstretching to consist of the complete or near-complete conversion of the molecule from B- to S-form. Instead, this assumption implies in general a more complex dynamic coexistence of hybridized and unhybridized forms of DNA. We argue that such coexistence can rationalize several recent experimental observations.
Recent experiments J. van Mameren et al. PNAS 106, 18231 (2009)] provide a detailed spatial picture of overstretched DNA, showing that under certain conditions the two strands of the double helix separate at about 65 pN. It was proposed that this observation rules out the existence of an elongated, hybridized form of DNA ('S-DNA') . Here we argue that the S-DNA picture is consistent with the observation of unpeeling during overstretching. We demonstrate that assuming the existence of S-DNA does not imply DNA overstretching to consist of the complete or near-complete conversion of the molecule from B- to S-form. Instead, this assumption implies in general a more complex dynamic coexistence of hybridized and unhybridized forms of DNA. We argue that such coexistence can rationalize several recent experimental observations.
[ { "type": "R", "before": "We emphasize that assuming the", "after": "Recent experiments", "start_char_pos": 0, "end_char_pos": 30 }, { "type": "A", "before": null, "after": "J. van Mameren et al. PNAS 106, 18231 (2009)", "start_char_pos": 31, "end_char_pos": 31 }, { "type": "A", "before": null, "after": "provide a detailed spatial picture of overstretched DNA, showing that under certain conditions the two strands of the double helix separate at about 65 pN. It was proposed that this observation rules out the", "start_char_pos": 33, "end_char_pos": 33 }, { "type": "R", "before": "does not immediately", "after": ". Here we argue that the S-DNA picture is consistent with the observation of unpeeling during overstretching. We demonstrate that assuming the existence of S-DNA does not", "start_char_pos": 94, "end_char_pos": 114 } ]
[ 0, 229, 353 ]
1002.2604
1
The impact of default events on the loss distribution of a credit portfolio can be assessed by determining the loss distribution conditional on these events. While it is conceptually easy to estimate loss distributions conditional on default events by means of Monte Carlo simulation, it becomes impractical for two or more simultaneous defaults as the conditioning event is extremely rare. We provide an analytical approach to the calculation of the conditional loss distribution for the CreditRisk+ portfolio model with independent random loss given default distributions. The analytical solution for this case can be used to study the properties of the conditional loss distributions and to discuss how they relate to the identification of risk concentrations .
The impact of a stress scenario of default events on the loss distribution of a credit portfolio can be assessed by determining the loss distribution conditional on these events. While it is conceptually easy to estimate loss distributions conditional on default events by means of Monte Carlo simulation, it becomes impractical for two or more simultaneous defaults as then the conditioning event is extremely rare. We provide an analytical approach to the calculation of the conditional loss distribution for the CreditRisk+ portfolio model with independent random loss given default distributions. The analytical solution for this case can be used to check the accuracy of an approximation to the conditional loss distribution whereby the unconditional model is run with stressed input probabilities of default (PDs). It turns out that this approximation is unbiased. Numerical examples, however, suggest that the approximation may be seriously inaccurate but that the inaccuracy leads to overestimation of tail losses and hence the approach errs on the conservative side .
[ { "type": "A", "before": null, "after": "a stress scenario of", "start_char_pos": 14, "end_char_pos": 14 }, { "type": "A", "before": null, "after": "then", "start_char_pos": 350, "end_char_pos": 350 }, { "type": "R", "before": "study the properties of", "after": "check the accuracy of an approximation to", "start_char_pos": 630, "end_char_pos": 653 }, { "type": "R", "before": "distributions and to discuss how they relate to the identification of risk concentrations", "after": "distribution whereby the unconditional model is run with stressed input probabilities of default (PDs). It turns out that this approximation is unbiased. Numerical examples, however, suggest that the approximation may be seriously inaccurate but that the inaccuracy leads to overestimation of tail losses and hence the approach errs on the conservative side", "start_char_pos": 675, "end_char_pos": 764 } ]
[ 0, 158, 392, 576 ]
1002.2909
1
We develop a generalization of the Black-Cox structural model of default risk. The extended model captures uncertainty related to firms ability to avoid default even if companys liabilities momentarily exceeding its assets. Diffusion in a linear potential with the radiation boundary condition is used to mimic a companys default process. The exact solution of the corresponding Fokker-Planck equation allows for derivation of analytical expressions for the cumulative probability of default and the relevant hazard rate. The obtained closed formulas fit well the historical data on global corporate defaults and demonstrate the split behavior of credit spreads for bonds in different categories of speculative-grade ratings with varying time to maturity.
We develop a generalization of the Black-Cox structural model of default risk. The extended model captures uncertainty related to firm's ability to avoid default even if company's liabilities momentarily exceeding its assets. Diffusion in a linear potential with the radiation boundary condition is used to mimic a company's default process. The exact solution of the corresponding Fokker-Planck equation allows for derivation of analytical expressions for the cumulative probability of default and the relevant hazard rate. Obtained closed formulas fit well the historical data on global corporate defaults and demonstrate the split behavior of credit spreads for bonds of companies in different categories of speculative-grade ratings with varying time to maturity. Introduction of the finite rate of default at the boundary improves valuation of credit risk for short time horizons, which is the key advantage of the proposed model. We also consider the influence of uncertainty in the initial distance to the default barrier on the outcome of the model and demonstrate that this additional source of incomplete information may be responsible for non-zero credit spreads for bonds with very short time to maturity.
[ { "type": "R", "before": "firms", "after": "firm's", "start_char_pos": 130, "end_char_pos": 135 }, { "type": "R", "before": "companys", "after": "company's", "start_char_pos": 169, "end_char_pos": 177 }, { "type": "R", "before": "companys", "after": "company's", "start_char_pos": 313, "end_char_pos": 321 }, { "type": "R", "before": "The obtained", "after": "Obtained", "start_char_pos": 522, "end_char_pos": 534 }, { "type": "A", "before": null, "after": "of companies", "start_char_pos": 672, "end_char_pos": 672 }, { "type": "A", "before": null, "after": "time to maturity. Introduction of the finite rate of default at the boundary improves valuation of credit risk for short time horizons, which is the key advantage of the proposed model. We also consider the influence of uncertainty in the initial distance to the default barrier on the outcome of the model and demonstrate that this additional source of incomplete information may be responsible for non-zero credit spreads for bonds with very short", "start_char_pos": 739, "end_char_pos": 739 } ]
[ 0, 78, 223, 338, 521 ]
1002.3493
1
Typical protocols for peer-to-peer file sharing over the Internet divide files to be shared into pieces. New peers strive to obtain a complete collection of pieces from other peers and from a seed. In this paper we identify a problem that can occur if the seeding rate is not large enough. The problem is that, even if the statistics of the system are symmetric in the pieces, there can be symmetry breaking, with one piece becoming very rare. If peers depart after obtaining a complete collection, they can tend to leave before helping other peers receive the rare piece .
Typical protocols for peer-to-peer file sharing over the Internet divide files to be shared into pieces. New peers strive to obtain a complete collection of pieces from other peers and from a seed. In this paper we investigate a problem that can occur if the seeding rate is not large enough. The problem is that, even if the statistics of the system are symmetric in the pieces, there can be symmetry breaking, with one piece becoming very rare. If peers depart after obtaining a complete collection, they can tend to leave before helping other peers receive the rare piece . Assuming that peers arrive with no pieces, there is a single seed, random peer contacts are made, random useful pieces are downloaded, and peers depart upon receiving the complete file, the system is stable if the seeding rate (in pieces per time unit) is greater than the arrival rate, and is unstable if the seeding rate is less than the arrival rate. The result persists for any piece selection policy that selects from among useful pieces, such as rarest first, and it persists with the use of network coding .
[ { "type": "R", "before": "identify", "after": "investigate", "start_char_pos": 215, "end_char_pos": 223 }, { "type": "A", "before": null, "after": ". Assuming that peers arrive with no pieces, there is a single seed, random peer contacts are made, random useful pieces are downloaded, and peers depart upon receiving the complete file, the system is stable if the seeding rate (in pieces per time unit) is greater than the arrival rate, and is unstable if the seeding rate is less than the arrival rate. The result persists for any piece selection policy that selects from among useful pieces, such as rarest first, and it persists with the use of network coding", "start_char_pos": 572, "end_char_pos": 572 } ]
[ 0, 104, 197, 289, 443 ]
1002.3747
1
We investigate the large-volatility dynamics in financial markets, based on the minutely and daily data of the Chinese Indices and German DAX. The dynamic relaxation both before and after large volatilities is characterized by a power law, and the exponents p_\pm usually vary with the strength of the large volatilities. The large-volatility dynamics is time-reversal symmetric at the minutely time scale , while asymmetric at the daily time scale. Careful analysis reveals that the time-reversal asymmetry is mainly induced by exogenous events. It is also the exogenous events which drive the financial dynamics to a non-stationary state. In general, the Chinese Indices and German DAX are in different universality classes. An interacting herding model without and with exogenous driving forces could qualitatively describe the large-volatility dynamics .
We investigate the large-volatility dynamics in financial markets, based on the minute-to-minute and daily data of the Chinese Indices and German DAX. The dynamic relaxation both before and after large volatilities is characterized by a power law, and the exponents p_\pm usually vary with the strength of the large volatilities. The large-volatility dynamics is time-reversal symmetric at the time scale in minutes , while asymmetric at the daily time scale. Careful analysis reveals that the time-reversal asymmetry is mainly induced by exogenous events. It is also the exogenous events which drive the financial dynamics to a non-stationary state. Different characteristics of the Chinese and German stock markets are uncovered .
[ { "type": "R", "before": "minutely", "after": "minute-to-minute", "start_char_pos": 80, "end_char_pos": 88 }, { "type": "R", "before": "minutely time scale", "after": "time scale in minutes", "start_char_pos": 386, "end_char_pos": 405 }, { "type": "R", "before": "In general, the Chinese Indices and German DAX are in different universality classes. An interacting herding model without and with exogenous driving forces could qualitatively describe the large-volatility dynamics", "after": "Different characteristics of the Chinese and German stock markets are uncovered", "start_char_pos": 641, "end_char_pos": 856 } ]
[ 0, 142, 321, 449, 546, 640, 726 ]
1002.3809
1
Gene expression is an inherently noisy process capable of displaying phenotypic variation despite constant environmental conditions. This stochastic behavior results from fluctuations in the transcription and translation of genes between identical cells . DNA looping, which is a common means of regulating transcription, is very much a stochastic process; the loops arise from the thermal motion of the DNA and other fluctuations of the cellular environment. We present single-molecule measurements of DNA loop formation and breakdown when an artificial fluctuating force, applied to mimic a fluctuating cellular environment, is imposed on the DNA. We show that loop formation is greatly enhanced in the presence of noise , yet find that hypothetical regulatory schemes that employ mechanical tension in the DNA--as a sensitive switch to control transcription--can be surprisingly robust due to a fortuitous cancellation of noise effects.
Living cells provide a fluctuating, out-of-equilibrium environment in which genes must coordinate cellular function . DNA looping, which is a common means of regulating transcription, is very much a stochastic process; the loops arise from the thermal motion of the DNA and other fluctuations of the cellular environment. We present single-molecule measurements of DNA loop formation and breakdown when an artificial fluctuating force, applied to mimic a fluctuating cellular environment, is imposed on the DNA. We show that loop formation is greatly enhanced in the presence of noise of only a fraction of k_B T , yet find that hypothetical regulatory schemes that employ mechanical tension in the DNA--as a sensitive switch to control transcription--can be surprisingly robust due to a fortuitous cancellation of noise effects.
[ { "type": "R", "before": "Gene expression is an inherently noisy process capable of displaying phenotypic variation despite constant environmental conditions. This stochastic behavior results from fluctuations in the transcription and translation of genes between identical cells", "after": "Living cells provide a fluctuating, out-of-equilibrium environment in which genes must coordinate cellular function", "start_char_pos": 0, "end_char_pos": 253 }, { "type": "A", "before": null, "after": "of only a fraction of k_B T", "start_char_pos": 723, "end_char_pos": 723 } ]
[ 0, 132, 356, 459, 649 ]
1002.4744
1
We observe the performances of three strategy evaluation schemes, which are the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game in a stock market where the price is exogenously determined. The price is either directly adopted from the real stock market indices or generated with the Markov chain of order \le 2. Each scheme's success is quantified by average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increasing or decreasing . On the other hand, the minority game is suitable for a market with persistent zig-zag price patterns. These observations suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.
Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents' strategy preferences and consequently their behavioral pattern. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances. We observe the performance of three strategy evaluation schemes, the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game , in a stock market where the price is exogenously determined. The price is either directly adopted from the real stock market indices or generated with a Markov chain of order \le 2. Each scheme's success is quantified by average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increase or decrease . On the other hand, the minority game is suitable for a market with persistent zig-zag price patterns. We also discuss the consequence of implementing finite memory in the scoring processes of strategies. Our findings suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.
[ { "type": "A", "before": null, "after": "Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents' strategy preferences and consequently their behavioral pattern. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "performances", "after": "performance", "start_char_pos": 16, "end_char_pos": 28 }, { "type": "D", "before": "which are", "after": null, "start_char_pos": 67, "end_char_pos": 76 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 184, "end_char_pos": 184 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 336, "end_char_pos": 339 }, { "type": "R", "before": "increasing or decreasing", "after": "increase or decrease", "start_char_pos": 697, "end_char_pos": 721 }, { "type": "R", "before": "These observations", "after": "We also discuss the consequence of implementing finite memory in the scoring processes of strategies. Our findings", "start_char_pos": 826, "end_char_pos": 844 } ]
[ 0, 245, 475, 600, 723, 825 ]
1002.5046
1
A simple kinematic model for the trajectories of Listeria monocytogenes is generalized to a dynamical system rich enough to exhibit the resonant Hopf bifurcation structure of excitable media and simple enough to be studied geometrically. It is shown how the effectiveness of the L. monocytogenes model is an instance of a more general phenomenon in aggregate systems exhibited by the chemical agents propelling the bacteria .
A simple kinematic model for the trajectories of Listeria monocytogenes is generalized to a dynamical system rich enough to exhibit the resonant Hopf bifurcation structure of excitable media and simple enough to be studied geometrically. It is shown how L. monocytogenes trajectories and meandering spiral waves URLanized by the same type of attracting set .
[ { "type": "D", "before": "the effectiveness of the", "after": null, "start_char_pos": 254, "end_char_pos": 278 }, { "type": "R", "before": "model is an instance of a more general phenomenon in aggregate systems exhibited by the chemical agents propelling the bacteria", "after": "trajectories and meandering spiral waves URLanized by the same type of attracting set", "start_char_pos": 296, "end_char_pos": 423 } ]
[ 0, 237 ]
1003.2539
1
We make use of wavelet transform to study the multi-scale, self similar behavior and deviations thereof, in the stock prices of large companies, belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that, the wavelets belonging to the Daubechies' (Db) basis enables one to isolate local polynomial trends of different degrees, plays the key role in isolating fluctuations at different scales. behavior \mbox{%DIFAUXCMD hes5 }0pt%DIFAUXCMD of the fluctuations starting with high frequency fluctuations. } We make use of Db4 and Db6 basis sets to respectively isolate local linear and quadratic trends at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat tail non-Gaussian behavior, unstable periodic modulations, at finer scales, from which the characteristic k^{-3} power law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
We make use of wavelet transform to study the multi-scale, self similar behavior and deviations thereof, in the stock prices of large companies, belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that, the wavelets belonging to the Daubechies' (Db) basis enables one to isolate local polynomial trends of different degrees, plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^{-3 behavior \mbox{%DIFAUXCMD hes5 }0pt%DIFAUXCMD of the fluctuations starting with high frequency fluctuations. } We make use of Db4 and Db6 basis sets to respectively isolate local linear and quadratic trends at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat tail non-Gaussian behavior, unstable periodic modulations, at finer scales, from which the characteristic k^{-3} power law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
[ { "type": "A", "before": null, "after": "One of the primary motivations of this work is to study the emergence of the k^{-3", "start_char_pos": 526, "end_char_pos": 526 } ]
[ 0, 185, 322, 635, 835, 1033 ]
1003.2791
1
Many membrane channels and receptors exhibit adaptive, or desensitized, response to a strong sustained input stimulus, often supported by protein activity-dependent inactivation. Adaptive response is thought to be related to various cellular functions such as homeostasis and enlargement of dynamic range by background compensation. Here we study the quantitative relation between adaptive response and background compensation within the frameworkof mathematical models. We show that any particular type of adaptive response is neither sufficient nor necessary for an effective adaptive enlargement of dynamic range. We propose that a general mechanism for adaptive dynamic range enlargement comes about from the activity-dependent modulation of protein responsiveness by multiple biochemical modification . Therefore hierarchical biochemical processes such as methylation and phosphorylation are natural candidates to induce such adaptive dynamic range enlargement .
Many membrane channels and receptors exhibit adaptive, or desensitized, response to a strong sustained input stimulus, often supported by protein activity-dependent inactivation. Adaptive response is thought to be related to various cellular functions such as homeostasis and enlargement of dynamic range by background compensation. Here we study the quantitative relation between adaptive response and background compensation within a modeling framework. In contrast to the commonly held view, we show that any particular type of adaptive response is neither sufficient nor necessary for adaptive enlargement of dynamic range. In particular a precise adaptive response, where system activity is maintained at a constant level at steady state, does not ensure a large dynamic range neither in input signal nor in system output. A general mechanism for input dynamic range enlargement comes about from the activity-dependent modulation of protein responsiveness by multiple biochemical modification , regardless of the type of adaptive response it induces . Therefore hierarchical biochemical processes such as methylation and phosphorylation are natural candidates to induce this property in signalling systems .
[ { "type": "R", "before": "the frameworkof mathematical models. We", "after": "a modeling framework. In contrast to the commonly held view, we", "start_char_pos": 434, "end_char_pos": 473 }, { "type": "D", "before": "an effective", "after": null, "start_char_pos": 565, "end_char_pos": 577 }, { "type": "R", "before": "We propose that a", "after": "In particular a precise adaptive response, where system activity is maintained at a constant level at steady state, does not ensure a large dynamic range neither in input signal nor in system output. A", "start_char_pos": 617, "end_char_pos": 634 }, { "type": "R", "before": "adaptive", "after": "input", "start_char_pos": 657, "end_char_pos": 665 }, { "type": "A", "before": null, "after": ", regardless of the type of adaptive response it induces", "start_char_pos": 806, "end_char_pos": 806 }, { "type": "R", "before": "such adaptive dynamic range enlargement", "after": "this property in signalling systems", "start_char_pos": 927, "end_char_pos": 966 } ]
[ 0, 178, 332, 470, 616, 808 ]
1003.2791
2
Many membrane channels and receptors exhibit adaptive, or desensitized, response to a strong sustained input stimulus, often supported by protein activity-dependent inactivation. Adaptive response is thought to be related to various cellular functions such as homeostasis and enlargement of dynamic range by background compensation. Here we study the quantitative relation between adaptive response and background compensation within a modeling framework. In contrast to the commonly held view, we show that any particular type of adaptive response is neither sufficient nor necessary for adaptive enlargement of dynamic range. In particular a precise adaptive response, where system activity is maintained at a constant level at steady state, does not ensure a large dynamic range neither in input signal nor in system output. A general mechanism for input dynamic range enlargement comes about from the activity-dependent modulation of protein responsiveness by multiple biochemical modification, regardless of the type of adaptive response it induces. Therefore hierarchical biochemical processes such as methylation and phosphorylation are natural candidates to induce this property in signalling systems.
Many membrane channels and receptors exhibit adaptive, or desensitized, response to a strong sustained input stimulus, often supported by protein activity-dependent inactivation. Adaptive response is thought to be related to various cellular functions such as homeostasis and enlargement of dynamic range by background compensation. Here we study the quantitative relation between adaptive response and background compensation within a modeling framework. We show that any particular type of adaptive response is neither sufficient nor necessary for adaptive enlargement of dynamic range. In particular a precise adaptive response, where system activity is maintained at a constant level at steady state, does not ensure a large dynamic range neither in input signal nor in system output. A general mechanism for input dynamic range enlargement can come about from the activity-dependent modulation of protein responsiveness by multiple biochemical modification, regardless of the type of adaptive response it induces. Therefore hierarchical biochemical processes such as methylation and phosphorylation are natural candidates to induce this property in signaling systems.
[ { "type": "R", "before": "In contrast to the commonly held view, we", "after": "We", "start_char_pos": 456, "end_char_pos": 497 }, { "type": "R", "before": "comes", "after": "can come", "start_char_pos": 884, "end_char_pos": 889 }, { "type": "R", "before": "signalling", "after": "signaling", "start_char_pos": 1190, "end_char_pos": 1200 } ]
[ 0, 178, 332, 455, 627, 827, 1054 ]
1003.2811
1
Proteins interact with other proteins within biological pathways, forming connected subgraphs in the protein-protein interactome (PPI). However, proteins are often involved in multiple biological pathways which complicates inference about interactions between proteins. Gene expression data is informative here since genes within a particular pathway tend to have more correlated expression patterns than genes from distinct pathways. We provide an algorithm that uses gene expression information to remove inter-pathway protein-protein interactions, thereby simplifying the structure of the protein-protein interactome. This refined topology permits easier interpretation of multiple biological pathways simultaneously.
Proteins interact with other proteins within biological pathways, forming connected subgraphs in the protein-protein interactome (PPI). Proteins are often involved in multiple biological pathways which complicates interpretation of interactions between proteins. Gene expression data can assist our inference since genes within a particular pathway tend to have more correlated expression patterns than genes from distinct pathways. We provide an algorithm that uses gene expression information to remove inter-pathway protein-protein interactions, thereby simplifying the structure of the protein-protein interactome. This refined topology permits easier interpretation and greater biological coherence of multiple biological pathways simultaneously.
[ { "type": "R", "before": "However, proteins", "after": "Proteins", "start_char_pos": 136, "end_char_pos": 153 }, { "type": "R", "before": "inference about", "after": "interpretation of", "start_char_pos": 223, "end_char_pos": 238 }, { "type": "R", "before": "is informative here", "after": "can assist our inference", "start_char_pos": 291, "end_char_pos": 310 }, { "type": "A", "before": null, "after": "and greater biological coherence", "start_char_pos": 673, "end_char_pos": 673 } ]
[ 0, 135, 269, 434, 620 ]
1003.3093
1
A relationship between the preexponent of the rate constant and the distribution over activation barrier energies for enzymatic/protein reactions is revealed. We consider an enzyme solution is an ensemble of individual molecules with different values of the activation barrier energy described by the distribution. From solvent viscosity effect on the preexponent we derive the integral equation for the distribution and find its approximate solution. Our approach enables us to attain a twofold goal. Firstly it yields a simple interpretation of solvent viscosity dependence for enzymatic/protein reactions that requires neither a modification of the Kramers' theory nor that of the Stokes law. Secondly our approach enables us to deduce the form of the distribution over activation barrier energies. The obtained function has a familiar bell-shaped form and is in qualitative agreement with results of single enzyme kinetics measurements. General formalism is exemplified by the analysis of literature experimental data.
A relationship between the preexponent of the rate constant and the distribution over activation barrier energies for enzymatic/protein reactions is revealed. We consider an enzyme solution as an ensemble of individual molecules with different values of the activation barrier energy described by the distribution. From the solvent viscosity effect on the preexponent we derive the integral equation for the distribution and find its approximate solution. Our approach enables us to attain a twofold purpose. On the one hand it yields a simple interpretation of the solvent viscosity dependence for enzymatic/protein reactions that requires neither a modification of the Kramers' theory nor that of the Stokes law. On the other hand our approach enables us to deduce the form of the distribution over activation barrier energies. The obtained function has a familiar bell-shaped form and is in qualitative agreement with the results of single enzyme kinetics measurements. General formalism is exemplified by the analysis of literature experimental data.
[ { "type": "R", "before": "is", "after": "as", "start_char_pos": 190, "end_char_pos": 192 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 320, "end_char_pos": 320 }, { "type": "R", "before": "goal. Firstly", "after": "purpose. On the one hand", "start_char_pos": 497, "end_char_pos": 510 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 548, "end_char_pos": 548 }, { "type": "R", "before": "Secondly", "after": "On the other hand", "start_char_pos": 698, "end_char_pos": 706 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 895, "end_char_pos": 895 } ]
[ 0, 158, 314, 452, 502, 697, 803, 943 ]
1003.3114
1
We study simultaneous price drops of real stocks and show that for high drop thresholds they follow a power-law distribution. To reproduce these collective downturns, we propose a URLanized model of cascade spreading based on a probabilistic response of the system 's elements to stress conditions. This model is solvable using the theory of branching processes and the mean-field approximation and displays a power-law cascade-size distribution-similar to the empirically observed one-over a wide range of parameters .
We study simultaneous price drops of real stocks and show that for high drop thresholds they follow a power-law distribution. To reproduce these collective downturns, we propose a URLanized model of cascade spreading based on a probabilistic response of the system elements to stress conditions. This model is solvable using the theory of branching processes and the mean-field approximation . For a wide range of parameters, the system is in a critical state and displays a power-law cascade-size distribution similar to the empirically observed one. We further generalize the model to reproduce volatility clustering and other observed properties of real stocks .
[ { "type": "D", "before": "'s", "after": null, "start_char_pos": 265, "end_char_pos": 267 }, { "type": "A", "before": null, "after": ". For a wide range of parameters, the system is in a critical state", "start_char_pos": 395, "end_char_pos": 395 }, { "type": "R", "before": "distribution-similar", "after": "distribution similar", "start_char_pos": 434, "end_char_pos": 454 }, { "type": "R", "before": "one-over a wide range of parameters", "after": "one. We further generalize the model to reproduce volatility clustering and other observed properties of real stocks", "start_char_pos": 483, "end_char_pos": 518 } ]
[ 0, 125, 298 ]
1003.4299
1
In this paper we analyze the so-called Parisian ruin probability that happens when the surplus process stays below zero longer than fixed amount of time \zeta>0. We focus on general spectrally negative L\'{e}vy insurance risk process. For this class of processes we identify the expression for the ruin probability in terms of some other quantities that could be possibly calculated explicitly in many models. We find its Cram\'{e}r-type and convolution-equivalent asymptotics when reserves tends to infinity. Finally, we analyze few explicit examples.
In this paper we analyze so-called Parisian ruin probability that happens when surplus process stays below zero longer than fixed amount of time \zeta>0. We focus on general spectrally negative L\'{e}vy insurance risk process. For this class of processes we identify expression for ruin probability in terms of some other quantities that could be possibly calculated explicitly in many models. We find its Cram\'{e}r-type and convolution-equivalent asymptotics when reserves tends to infinity. Finally, we analyze few explicit examples.
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 25, "end_char_pos": 28 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 83, "end_char_pos": 86 }, { "type": "R", "before": "the expression for the", "after": "expression for", "start_char_pos": 275, "end_char_pos": 297 } ]
[ 0, 161, 234, 409, 509 ]
1003.4406
1
A general procedure to compute the output distribution D for certain nonlinear operators S (so called erosion-dilation cascades) is given. A refinement of the procedure yields D for some important LULU-operators S. Properties of D can also be used to characterize smoothing properties of a nonlinear operator S.
Two procedures to compute the output distribution phi_S of certain stack filters S (so called erosion-dilation cascades) are given. One rests on the disjunctive normal form of S and also yields the rank selection probabilities. The other is based on inclusion-exclusion and e.g. yields phi_S for some important LULU-operators S. Properties of phi_S can be used to characterize smoothing properties of S.
[ { "type": "R", "before": "A general procedure", "after": "Two procedures", "start_char_pos": 0, "end_char_pos": 19 }, { "type": "R", "before": "D for certain nonlinear operators", "after": "phi_S of certain stack filters", "start_char_pos": 55, "end_char_pos": 88 }, { "type": "R", "before": "is given. A refinement of the procedure yields D", "after": "are given. One rests on the disjunctive normal form of S and also yields the rank selection probabilities. The other is based on inclusion-exclusion and e.g. yields phi_S", "start_char_pos": 129, "end_char_pos": 177 }, { "type": "R", "before": "D can also", "after": "phi_S can", "start_char_pos": 229, "end_char_pos": 239 }, { "type": "D", "before": "a nonlinear operator", "after": null, "start_char_pos": 288, "end_char_pos": 308 } ]
[ 0, 138 ]
1003.5514
1
We consider the pricing of derivatives written on the discrete realized variance of an underlying security. In the literature, the realized variance is usually approximated by its continuous-time limit, the quadratic variation of the underlying log-price. Here, we characterize the short-time limits of call options on both objects. We find that the difference strongly depends on whether or not the stock price process has jumps. To study the exact valuation of options on the discrete realized varianceitself, we then propose a novel approach that allows to apply Fourier-Laplace techniques to price European-style options efficiently. To illustrate our results , we also present some numerical examples.
We consider the pricing of derivatives written on the discretely sampled realized variance of an underlying security. In the literature, the realized variance is usually approximated by its continuous-time limit, the quadratic variation of the underlying log-price. Here, we characterize the small-time limits of options on both objects. We find that the difference between them strongly depends on whether or not the stock price process has jumps. Subsequently, we propose two new methods to evaluate the price of options on the discretely sampled realized variance. One of the methods is approximative; it is based on correcting prices of options on quadratic variation by our asymptotic results. The other method is exact; it uses a novel randomization approach and applies Fourier-Laplace techniques . We compare the methods and illustrate our results by some numerical examples.
[ { "type": "R", "before": "discrete", "after": "discretely sampled", "start_char_pos": 54, "end_char_pos": 62 }, { "type": "R", "before": "short-time limits of call", "after": "small-time limits of", "start_char_pos": 282, "end_char_pos": 307 }, { "type": "A", "before": null, "after": "between them", "start_char_pos": 361, "end_char_pos": 361 }, { "type": "R", "before": "To study the exact valuation", "after": "Subsequently, we propose two new methods to evaluate the price", "start_char_pos": 432, "end_char_pos": 460 }, { "type": "R", "before": "discrete realized varianceitself, we then propose a novel approach that allows to apply", "after": "discretely sampled realized variance. One of the methods is approximative; it is based on correcting prices of options on quadratic variation by our asymptotic results. The other method is exact; it uses a novel randomization approach and applies", "start_char_pos": 479, "end_char_pos": 566 }, { "type": "R", "before": "to price European-style options efficiently. To", "after": ". We compare the methods and", "start_char_pos": 594, "end_char_pos": 641 }, { "type": "R", "before": ", we also present", "after": "by", "start_char_pos": 665, "end_char_pos": 682 } ]
[ 0, 107, 255, 332, 431, 638 ]
1003.5650
1
In 1999 Robert Fernholz observed an inconsistency between the normative assumption of existence of an equivalent martingale measure (EMM) and the empirical reality of diversity in equity markets. We explore a method of imposing diversity on market models by a type of antitrust regulation that is compatible with EMMs. The regulatory procedure breaks up companies that become too large, while holding the total number of companies constant by imposing a simultaneous merge of other companies. As an example, regulation is imposed on a market model in which diversity is maintained via a log-pole in the drift of the largest company .
In 1999 Robert Fernholz observed an inconsistency between the normative assumption of existence of an equivalent martingale measure (EMM) and the empirical reality of diversity in equity markets. We explore a method of imposing diversity on market models by a type of antitrust regulation that is compatible with EMMs. The regulatory procedure breaks up companies that become too large, while holding the total number of companies constant by imposing a simultaneous merge of other companies. The regulatory events are assumed to have no impact on portfolio values. As an example, regulation is imposed on a market model in which diversity is maintained via a log-pole in the drift of the largest company . The result is the removal of arbitrage opportunities from this market while maintaining the market's diversity .
[ { "type": "A", "before": null, "after": "The regulatory events are assumed to have no impact on portfolio values.", "start_char_pos": 493, "end_char_pos": 493 }, { "type": "A", "before": null, "after": ". The result is the removal of arbitrage opportunities from this market while maintaining the market's diversity", "start_char_pos": 633, "end_char_pos": 633 } ]
[ 0, 195, 318, 492 ]
1004.0336
1
MicroRNAs are endogenous non coding RNAs which negatively regulate the expression of protein-coding genes in plants and animals. They are known to play an important role in several biological processes and, together with transcription factors, form a complex and highly interconnected regulatory network. Looking at the structure of this network it is possible to recognize a few overrepresented motifs which are expected to perform important elementary regulatory functions. Among them a special role is played by the microRNA-mediated feedforward loop in which a master transcription factor regulates a microRNA and, together with it, a set of target genes. In this paper we show analytically and through simulations that the incoherent version of this motif can couple the fine-tuning of target protein level with an efficient noise control, thus conferring precision and stability to the overall gene expression program, especially in the presence of fluctuations in upstream regulators. Among the other results, a nontrivial prediction of our model is that the optimal attenuation of fluctuations coincides with a modest repression of the target expression. This feature is coherent with the expected fine-tuning function and in agreement with experimental observations of the actual impact of a wide class of microRNAs on the protein output of their targets .
MicroRNAs are endogenous non-coding RNAs which negatively regulate the expression of protein-coding genes in plants and animals. They are known to play an important role in several biological processes and, together with transcription factors, form a complex and highly interconnected regulatory network. Looking at the structure of this network it is possible to recognize a few overrepresented motifs which are expected to perform important elementary regulatory functions. Among them a special role is played by the microRNA-mediated feedforward loop in which a master transcription factor regulates a microRNA and, together with it, a set of target genes. In this paper we show analytically and through simulations that the incoherent version of this motif can couple the fine-tuning of a target protein level with an efficient noise control, thus conferring precision and stability to the overall gene expression program, especially in the presence of fluctuations in upstream regulators. Among the other results, a nontrivial prediction of our model is that the optimal attenuation of fluctuations coincides with a modest repression of the target expression. This feature is coherent with the expected fine-tuning function and in agreement with experimental observations of the actual impact of a wide class of microRNAs on the protein output of their targets . Finally we describe the impact on noise-buffering efficiency of the cross-talk between microRNA targets that can naturally arise if the microRNA-mediated circuit is not considered as isolated, but embedded in a larger network of regulations .
[ { "type": "R", "before": "non coding", "after": "non-coding", "start_char_pos": 25, "end_char_pos": 35 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 791, "end_char_pos": 791 }, { "type": "A", "before": null, "after": ". Finally we describe the impact on noise-buffering efficiency of the cross-talk between microRNA targets that can naturally arise if the microRNA-mediated circuit is not considered as isolated, but embedded in a larger network of regulations", "start_char_pos": 1365, "end_char_pos": 1365 } ]
[ 0, 128, 304, 475, 659, 992, 1163 ]
1004.0395
1
Peer-to-peer swarming is one of the de facto solutions for distributed content dissemination in today's Internet. By leveraging resources provided by clients, swarming systems reduce the load on and costs to publishers. However, there is a limit to how much cost savings can be gained from swarming; for example, for unpopular content peers will always depend on the publisher in order to complete their downloads. In this paper, we investigate such a dependenceof peers on a publisher . For this purpose, we propose a new metric, namely swarm self-sustainability . A swarm is referred to as self-sustaining if all its blocks are collectively held by peers; the self-sustainability of a swarm is the fraction of time in which the swarm is self-sustaining. We pose the following question: how does the self-sustainability of a swarm vary as a function of content popularity, the service capacity of the users, and the size of the file? We present a model to answer the posed question. We then propose efficient solution methods to compute self-sustainability. The accuracy of our estimates is validated against simulations . Finally, we also provide closed-form expressions for the fraction of time that a given number of blocks is collectively held by peers.
Peer-to-peer swarming is one of the de facto solutions for distributed content dissemination in today's Internet. By leveraging resources provided by clients, swarming systems reduce the load on and costs to publishers. However, there is a limit to how much cost savings can be gained from swarming; for example, for unpopular content peers will always depend on the publisher in order to complete their downloads. In this paper, we investigate this dependence . For this purpose, we propose a new metric, namely swarm self-sustainability . A swarm is referred to as self-sustaining if all its blocks are collectively held by peers; the self-sustainability of a swarm is the fraction of time in which the swarm is self-sustaining. We pose the following question: how does the self-sustainability of a swarm vary as a function of content popularity, the service capacity of the users, and the size of the file? We present a model to answer the posed question. We then propose efficient solution methods to compute self-sustainability. The accuracy of our estimates is validated against simulation . Finally, we also provide closed-form expressions for the fraction of time that a given number of blocks is collectively held by peers.
[ { "type": "R", "before": "de facto", "after": "de facto", "start_char_pos": 36, "end_char_pos": 44 }, { "type": "R", "before": "such a dependenceof peers on a publisher", "after": "this dependence", "start_char_pos": 445, "end_char_pos": 485 }, { "type": "R", "before": "swarm self-sustainability", "after": "swarm self-sustainability", "start_char_pos": 538, "end_char_pos": 563 }, { "type": "R", "before": "simulations", "after": "simulation", "start_char_pos": 1110, "end_char_pos": 1121 } ]
[ 0, 113, 219, 299, 414, 487, 565, 657, 755, 934, 983, 1058, 1123 ]
1004.0534
1
We present an analytical framework for modeling a priority-based load balancing scheme in cellular networks based on a modified version of the direct retry algorithm. The model differs in many respects from previous works on load balancing, in particular by incorporating the call admission process, through random access; by allowing the differentiation of users based on their priorities; and by incorporating received signal properties. The analysis illustrates that, for example, as more shared channels are used for load balancing an increase in overall system channel utilization is observed. However, more interestingly, the improvement in blocking probability per shared channel for load balanced users is maximized at an intermediate number of shared channels as opposed to the maximum number of these shared resources .
We present an analytical framework for modeling a priority-based load balancing scheme in cellular networks based on a modified version of the direct retry algorithm. The model differs in many respects from previous works on load balancing, in particular by incorporating the call admission process, through random access; by allowing the differentiation of users based on their priorities; and by incorporating received signal properties. The quantitative results illustrate that, for example, as more shared channels are used for load balancing an increase in overall system channel utilization is observed. However, more interestingly, the improvement in blocking probability per shared channel for load balanced users is maximized at an intermediate number of shared channels as opposed to the maximum number of these shared resources . This occurs because a balance is achieved between the number of users requesting connections and those that are already admitted to the network. Our analysis proves that cellular network operators can control the manner in which traffic is offloaded between neighboring cells by simply manipulating the length of the random access phase .
[ { "type": "R", "before": "analysis illustrates", "after": "quantitative results illustrate", "start_char_pos": 444, "end_char_pos": 464 }, { "type": "A", "before": null, "after": ". This occurs because a balance is achieved between the number of users requesting connections and those that are already admitted to the network. Our analysis proves that cellular network operators can control the manner in which traffic is offloaded between neighboring cells by simply manipulating the length of the random access phase", "start_char_pos": 828, "end_char_pos": 828 } ]
[ 0, 166, 322, 390, 439, 598 ]
1004.0534
2
We present an analytical framework for modeling a priority-based load balancing scheme in cellular networks based on a modified version of the direct retry algorithm . The model differs in many respects from previous works on load balancing , in particular by incorporating the call admission process, through random access ; by allowing the differentiation of users based on their priorities ; and by incorporating received signal properties . The quantitative results illustrate that, for example, as more shared channels are used for load balancing an increase in overall system channel utilization is observed. However, more interestingly, the improvement in blocking probability per shared channel for load balanced users is maximized at an intermediate number of shared channels as opposed to the maximum number of these shared resources. This occurs because a balance is achieved between the number of users requesting connections and those that are already admitted to the network . Our analysis proves that cellular network operators can control the manner in which traffic is offloaded between neighboring cells by simply manipulating the length of the random access phase .
We present an analytical framework for modeling a priority-based load balancing scheme in cellular networks based on a new algorithm called direct retry with truncated offloading channel resource pool (DR_{K . The model differs in many respects from previous works on load balancing . Foremost, it incorporates the call admission process, through random access . In specific, the proposed model implements the Physical Random Access Channel used in 3GPP network standards. Furthermore, the proposed model allows the differentiation of users based on their priorities . The quantitative results illustrate that, for example, cellular network operators can control the manner in which traffic is offloaded between neighboring cells by simply manipulating the length of the random access phase. Our analysis allow to quantitatively determine the blocking probability individual users will experience given a specific length of random access phase. Furthermore, we observe that the improvement in blocking probability per shared channel for load balanced users using DR_{K is maximized at an intermediate number of shared channels , as opposed to the maximum number of these shared resources. This occurs because a balance is achieved between the number of users requesting connections and those that are already admitted to the network .
[ { "type": "R", "before": "modified version of the direct retry algorithm", "after": "new algorithm called direct retry with truncated offloading channel resource pool (DR_{K", "start_char_pos": 119, "end_char_pos": 165 }, { "type": "R", "before": ", in particular by incorporating", "after": ". Foremost, it incorporates", "start_char_pos": 241, "end_char_pos": 273 }, { "type": "R", "before": "; by allowing the", "after": ". In specific, the proposed model implements the Physical Random Access Channel used in 3GPP network standards. Furthermore, the proposed model allows the", "start_char_pos": 324, "end_char_pos": 341 }, { "type": "D", "before": "; and by incorporating received signal properties", "after": null, "start_char_pos": 393, "end_char_pos": 442 }, { "type": "R", "before": "as more shared channels are used for load balancing an increase in overall system channel utilization is observed. However, more interestingly,", "after": "cellular network operators can control the manner in which traffic is offloaded between neighboring cells by simply manipulating the length of the random access phase. Our analysis allow to quantitatively determine the blocking probability individual users will experience given a specific length of random access phase. Furthermore, we observe that", "start_char_pos": 500, "end_char_pos": 643 }, { "type": "A", "before": null, "after": "using DR_{K", "start_char_pos": 727, "end_char_pos": 727 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 786, "end_char_pos": 786 }, { "type": "D", "before": ". Our analysis proves that cellular network operators can control the manner in which traffic is offloaded between neighboring cells by simply manipulating the length of the random access phase", "after": null, "start_char_pos": 991, "end_char_pos": 1184 } ]
[ 0, 167, 325, 394, 444, 614, 846, 992 ]
1004.0534
3
We present an analytical framework for modeling a priority-based load balancing scheme in cellular networks based on a new algorithm called direct retry with truncated offloading channel resource pool (DR_{K}). The model differs in many respects from previous works on load balancing. Foremost, it incorporates the call admission process, through random access. In specific, the proposed model implements the Physical Random Access Channel used in 3GPP network standards. Furthermore, the proposed model allows the differentiation of users based on their priorities. The quantitative results illustrate that, for example, cellular network operators can control the manner in which traffic is offloaded between neighboring cells by simply manipulating the length of the random access phase. Our analysis allow to quantitatively determine the blocking probability individual users will experience given a specific length of random access phase. Furthermore, we observe that the improvement in blocking probability per shared channel for load balanced users using DR_{K} is maximized at an intermediate number of shared channels, as opposed to the maximum number of these shared resources. This occurs because a balance is achieved between the number of users requesting connections and those that are already admitted to the network .
We present an analytical framework for modeling a priority-based load balancing scheme in cellular networks based on a new algorithm called direct retry with truncated offloading channel resource pool (DR_{K}). The model , developed for a baseline case of two cell network, differs in many respects from previous works on load balancing. Foremost, it incorporates the call admission process, through random access. In specific, the proposed model implements the Physical Random Access Channel used in 3GPP network standards. Furthermore, the proposed model allows the differentiation of users based on their priorities. The quantitative results illustrate that, for example, cellular network operators can control the manner in which traffic is offloaded between neighboring cells by simply adjusting the length of the random access phase. Our analysis also allows for the quantitative determination of the blocking probability individual users will experience given a specific length of random access phase. Furthermore, we observe that the improvement in blocking probability per shared channel for load balanced users using DR_{K} is maximized at an intermediate number of shared channels, as opposed to the maximum number of these shared resources. This occurs because a balance is achieved between the number of users requesting connections and those that are already admitted to the network . We also present an extension of our analytical model to a multi-cell network (by means of an approximation) and an application of the proposed load balancing scheme in the context of opportunistic spectrum access .
[ { "type": "A", "before": null, "after": ", developed for a baseline case of two cell network,", "start_char_pos": 221, "end_char_pos": 221 }, { "type": "R", "before": "manipulating", "after": "adjusting", "start_char_pos": 739, "end_char_pos": 751 }, { "type": "R", "before": "allow to quantitatively determine the", "after": "also allows for the quantitative determination of the", "start_char_pos": 804, "end_char_pos": 841 }, { "type": "A", "before": null, "after": ". We also present an extension of our analytical model to a multi-cell network (by means of an approximation) and an application of the proposed load balancing scheme in the context of opportunistic spectrum access", "start_char_pos": 1332, "end_char_pos": 1332 } ]
[ 0, 210, 285, 362, 472, 567, 790, 943, 1187 ]
1004.0595
1
Sustaining efficiency and stability by properly controlling the equity to asset ratio is one of the most important and difficult challenges in bank management. Due to unexpected and abrupt decline of asset values, a bank must closely monitor its net worth as well as market conditions, and one of its important concerns is when to raise more capital so as not to violate capital adequacy requirements. In this paper, we model the tradeoff between avoiding costs of delay and premature capital raising, and solve the corresponding optimal stopping problem. In order to model defaults in a bank's loan/credit business portfolios, we represent its net worth by appropriate Levy processes, and solve explicitly for the double exponential jump diffusion process . In particular, for the spectrally negative case, we generalize the formulation using the scale function, and obtain explicitly the optimal solutions for the exponential jump diffusion process.
Sustaining efficiency and stability by properly controlling the equity to asset ratio is one of the most important and difficult challenges in bank management. Due to unexpected and abrupt decline of asset values, a bank must closely monitor its net worth as well as market conditions, and one of its important concerns is when to raise more capital so as not to violate capital adequacy requirements. In this paper, we model the tradeoff between avoiding costs of delay and premature capital raising, and solve the corresponding optimal stopping problem. In order to model defaults in a bank's loan/credit business portfolios, we represent its net worth by Levy processes, and solve explicitly for the double exponential jump diffusion process and for a general spectrally negative Levy process.
[ { "type": "D", "before": "appropriate", "after": null, "start_char_pos": 658, "end_char_pos": 669 }, { "type": "R", "before": ". In particular, for the spectrally negative case, we generalize the formulation using the scale function, and obtain explicitly the optimal solutions for the exponential jump diffusion", "after": "and for a general spectrally negative Levy", "start_char_pos": 757, "end_char_pos": 942 } ]
[ 0, 159, 401, 555, 758 ]
1004.0685
1
The model is aimed to discriminate the %DIFDELCMD < {\guillemotleft}%%% good%DIFDELCMD < {\guillemotright} %%% and the%DIFDELCMD < {\guillemotleft}%%% bad%DIFDELCMD < {\guillemotright} %%% companies in Russian corporate sector based on their financial statements data (Russian Standards). The data sample consists of 126 Russian public companies- issuers of Ruble bonds which represent about 36\% of total number of corporate bonds issuers. 25 companies have defaulted on their debt in 2008-2009 which represent around 30\% of default cases. 29\% companies in the sample have credit ratings assigned compared to 34\% in the parent population. No SPV companies were included in the sample. The model shows in-sample Gini AR about 73\% and gives a reasonable mapping to external ratings. The model can be used to calculate implied credit rating for Russian companies which many of them don't have one.
The model is aimed to discriminate the %DIFDELCMD < {\guillemotleft}%%% %DIFDELCMD < {\guillemotright} %%% %DIFDELCMD < {\guillemotleft}%%% %DIFDELCMD < {\guillemotright} %%% 'good' and the 'bad' companies in Russian corporate sector based on their financial statements data (Russian Accounting Standards). The data sample consists of 126 Russian public companies- issuers of Ruble bonds which represent about 36\% of total number of corporate bonds issuers. 25 companies have defaulted on their debt in 2008-2009 which represent around 30\% of default cases. 29\% companies in the sample have credit ratings assigned compared to 34\% in the parent population. No SPV companies were included in the sample. The model shows in-sample Gini AR about 73\% and gives a reasonable mapping to external ratings. The model can be used to calculate implied credit rating for Russian companies which many of them don't have one.
[ { "type": "D", "before": "good", "after": null, "start_char_pos": 72, "end_char_pos": 76 }, { "type": "D", "before": "and the", "after": null, "start_char_pos": 111, "end_char_pos": 118 }, { "type": "D", "before": "bad", "after": null, "start_char_pos": 151, "end_char_pos": 154 }, { "type": "A", "before": null, "after": "'good' and the 'bad'", "start_char_pos": 189, "end_char_pos": 189 }, { "type": "A", "before": null, "after": "Accounting", "start_char_pos": 278, "end_char_pos": 278 } ]
[ 0, 290, 442, 543, 644, 690, 787 ]
1004.0685
2
The model is aimed to discriminate the 'good' and the 'bad' companies in Russian corporate sector based on their financial statements data ( Russian Accounting Standards ) . The data sample consists of 126 Russian public companies- issuers of Ruble bonds which represent about 36\% of total number of corporate bonds issuers. 25 companies have defaulted on their debt in 2008-2009 which represent around 30\% of default cases. 29\% companies in the sample have credit ratings assigned compared to 34\% in the parent population. No SPV companies were included in the sample. The model shows in-sample Gini AR about 73\% and gives a reasonable mapping to external ratings. The model can be used to calculate implied credit rating for Russian companies which many of them don't have one .
The model is aimed to discriminate the 'good' and the 'bad' companies in Russian corporate sector based on their financial statements data based on Russian Accounting Standards . The data sample consists of 126 Russian public companies- issuers of Ruble bonds which represent about 36\% of total number of corporate bonds issuers. 25 companies have defaulted on their debt in 2008-2009 which represent around 30\% of default cases. No SPV companies were included in the sample. The model shows in-sample Gini AR about 73\% and gives a reasonable and simple rule of mapping to external ratings. The model can be used to calculate implied credit rating for Russian companies which many of them don't have .
[ { "type": "R", "before": "(", "after": "based on", "start_char_pos": 139, "end_char_pos": 140 }, { "type": "D", "before": ")", "after": null, "start_char_pos": 170, "end_char_pos": 171 }, { "type": "D", "before": "29\\% companies in the sample have credit ratings assigned compared to 34\\% in the parent population.", "after": null, "start_char_pos": 427, "end_char_pos": 527 }, { "type": "A", "before": null, "after": "and simple rule of", "start_char_pos": 642, "end_char_pos": 642 }, { "type": "D", "before": "one", "after": null, "start_char_pos": 781, "end_char_pos": 784 } ]
[ 0, 173, 325, 426, 527, 573, 671 ]
1004.0965
2
The Wako-Sait{\^o}-Mu\~noz-Eaton (WSME) model, initially introduced in the theory of protein folding, has also been used in modeling the RNA folding and some epitaxial phenomena. The advantage of this model is that it admits exact solution in the general inhomogeneous case (Bruscolini and Pelizzola, 2002) which facilitates the study of realistic systems. However, a shortcoming of this model is that it accounts only for interactions within contiguous stretches of native bonds or atomic chains while neglecting interstretch (interchain) interactions. But due to the biopolymer (atomic chain) flexibility, the monomers (atoms) separated by several non-native bonds along the sequence can become closely spaced. This produces their strong interaction. The inclusion of non-WSME interactions into the model makes the model more realistic and improves its performance. In this study we add arbitrary interactions of finite range and solve the new model by means of the transfer matrix technique. We can therefore exactly account for the interactions with the radii along the chain which in the proteomics are classified as medium- and moderately long-range interactions .
The Wako-Sait{\^o}-Mu\~noz-Eaton (WSME) model, initially introduced in the theory of protein folding, has also been used in modeling the RNA folding and some epitaxial phenomena. The advantage of this model is that it admits exact solution in the general inhomogeneous case (Bruscolini and Pelizzola, 2002) which facilitates the study of realistic systems. However, a shortcoming of the model is that it accounts only for interactions within continuous stretches of native bonds or atomic chains while neglecting interstretch (interchain) interactions. But due to the biopolymer (atomic chain) flexibility, the monomers (atoms) separated by several non-native bonds along the sequence can become closely spaced. This produces their strong interaction. The inclusion of non-WSME interactions into the model makes the model more realistic and improves its performance. In this study we add arbitrary interactions of finite range and solve the new model by means of the transfer matrix technique. We can therefore exactly account for the interactions which in proteomics are classified as medium- and moderately long-range ones .
[ { "type": "R", "before": "this", "after": "the", "start_char_pos": 383, "end_char_pos": 387 }, { "type": "R", "before": "contiguous", "after": "continuous", "start_char_pos": 443, "end_char_pos": 453 }, { "type": "R", "before": "with the radii along the chain which in the", "after": "which in", "start_char_pos": 1049, "end_char_pos": 1092 }, { "type": "R", "before": "interactions", "after": "ones", "start_char_pos": 1156, "end_char_pos": 1168 } ]
[ 0, 178, 356, 553, 712, 752, 867, 994 ]
1004.1138
1
We compute the analytic expression of the probability distributions F{FTSE100,+} and F{FTSE100,-} of the normalized positive and negative FTSE100 (UK) index daily returns r(t). Furthermore, we define the \alpha re-scaled FTSE100 daily index positive returns r(t) ^\alpha and negative returns (-r(t)) ^\alpha that we call, after normalization, the \alpha positive fluctuations and \alpha negative fluctuations. We use the Kolmogorov-Smirnov statistical test, as a method, to find the values of \alpha that optimize the data collapse of the histogram of the \alpha fluctuations with the Bramwell-Holdsworth-Pinton (BHP) probability density function. The optimal parameters that we found are \alpha +=0.55 and \alpha- =0.55. Since the BHP probability density function appears in several other dissimilar phenomena, our results reveal universality in the stock exchange markets.
We compute the analytic expression of the probability distributions F{FTSE100,+} and F{FTSE100,-} of the normalized positive and negative FTSE100 (UK) index daily returns r(t). Furthermore, we define the alpha re-scaled FTSE100 daily index positive returns r(t) ^alpha and negative returns (-r(t)) ^alpha that we call, after normalization, the alpha positive fluctuations and alpha negative fluctuations. We use the Kolmogorov-Smirnov statistical test, as a method, to find the values of alpha that optimize the data collapse of the histogram of the alpha fluctuations with the Bramwell-Holdsworth-Pinton (BHP) probability density function. The optimal parameters that we found are alpha +=0.55 and alpha- =0.55. Since the BHP probability density function appears in several other dissimilar phenomena, our results reveal universality in the stock exchange markets.
[ { "type": "R", "before": "\\alpha", "after": "alpha", "start_char_pos": 204, "end_char_pos": 210 }, { "type": "R", "before": "^\\alpha", "after": "^alpha", "start_char_pos": 263, "end_char_pos": 270 }, { "type": "R", "before": "^\\alpha", "after": "^alpha", "start_char_pos": 300, "end_char_pos": 307 }, { "type": "R", "before": "\\alpha", "after": "alpha", "start_char_pos": 347, "end_char_pos": 353 }, { "type": "R", "before": "\\alpha", "after": "alpha", "start_char_pos": 380, "end_char_pos": 386 }, { "type": "R", "before": "\\alpha", "after": "alpha", "start_char_pos": 493, "end_char_pos": 499 }, { "type": "R", "before": "\\alpha", "after": "alpha", "start_char_pos": 556, "end_char_pos": 562 }, { "type": "R", "before": "\\alpha", "after": "alpha", "start_char_pos": 689, "end_char_pos": 695 }, { "type": "R", "before": "\\alpha-", "after": "alpha-", "start_char_pos": 707, "end_char_pos": 714 } ]
[ 0, 176, 409, 647, 721 ]
1004.1210
1
We compute the analytic expression of the probability distributions F{AEX,+} and F{AEX,-} of the normalized positive and negative AEX (Netherlands) index daily returns r(t). Furthermore, we define the \alpha re-scaled AEX daily index positive returns r(t) ^{\alpha and negative returns (-r(t)) ^{\alpha that we call, after normalization, the \alpha positive fluctuations and \alpha negative fluctuations. We use the Kolmogorov-Smirnov statistical test, as a method, to find the values of \alpha that optimize the data collapse of the histogram of the \alpha fluctuations with the Bramwell-Holdsworth-Pinton (BHP) probability density function. The optimal parameters that we found are \alpha+=0.46 and \alpha-=0.43. Since the BHP probability density function appears in several other dissimilar phenomena, our results reveal universality in the stock exchange markets.
We compute the analytic expression of the probability distributions F{AEX,+} and F{AEX,-} of the normalized positive and negative AEX (Netherlands) index daily returns r(t). Furthermore, we define the \alpha re-scaled AEX daily index positive returns r(t) ^\alpha and negative returns (-r(t)) ^\alpha that we call, after normalization, the \alpha positive fluctuations and \alpha negative fluctuations. We use the Kolmogorov-Smirnov statistical test, as a method, to find the values of \alpha that optimize the data collapse of the histogram of the \alpha fluctuations with the Bramwell-Holdsworth-Pinton (BHP) probability density function. The optimal parameters that we found are \alpha+=0.46 and \alpha-=0.43. Since the BHP probability density function appears in several other dissimilar phenomena, our results reveal universality in the stock exchange markets.
[ { "type": "R", "before": "^{\\alpha", "after": "^\\alpha", "start_char_pos": 256, "end_char_pos": 264 }, { "type": "R", "before": "^{\\alpha", "after": "^\\alpha", "start_char_pos": 294, "end_char_pos": 302 } ]
[ 0, 173, 404, 642, 714 ]
1004.2652
1
We show two universal, Boolean, deterministic logic schemes based on binary noise timefunctions that can be realized without time averaging units. The first scheme is based on a new bipolar random telegraph wave scheme and the second one makes use of the recent noise-based logic which is conjectured to be the brain's method of logic operations [Physics Letters A 373 (2009) 2338-2342 , arXiv:0902.2033 ]. For binary-valued logic operations, the two simple Boolean schemes presented in this paper use zero (no noise) for the logic Low (L) state. In the random telegraph wave-based scheme, for multi-valued logic operations, additive superpositions of logic states must be avoided, while multiplicative superpositions utilizing hyperspace base vectors can still be utilized. These modifications, while keeping the information richness of multi-valued (noise-based) logic, result in a significant speedup of logic operations for the same signal bandwidth. The logic hyperspace of the first scheme results random telegraph waves with the same statistical properties as the original noise bits. The hyperspace of the second scheme is the same as in the already published version .
We show two universal, Boolean, deterministic logic schemes based on binary noise timefunctions that can be realized without time-averaging units. The first scheme is based on a new bipolar random telegraph wave scheme and the second one makes use of the recent noise-based logic which is conjectured to be the brain's method of logic operations [Physics Letters A 373 (2009) 2338-2342 ]. Error propagation and error removal issues are also addressed .
[ { "type": "R", "before": "time averaging", "after": "time-averaging", "start_char_pos": 125, "end_char_pos": 139 }, { "type": "D", "before": ", arXiv:0902.2033", "after": null, "start_char_pos": 386, "end_char_pos": 403 }, { "type": "R", "before": "For binary-valued logic operations, the two simple Boolean schemes presented in this paper use zero (no noise) for the logic Low (L) state. In the random telegraph wave-based scheme, for multi-valued logic operations, additive superpositions of logic states must be avoided, while multiplicative superpositions utilizing hyperspace base vectors can still be utilized. These modifications, while keeping the information richness of multi-valued (noise-based) logic, result in a significant speedup of logic operations for the same signal bandwidth. The logic hyperspace of the first scheme results random telegraph waves with the same statistical properties as the original noise bits. The hyperspace of the second scheme is the same as in the already published version", "after": "Error propagation and error removal issues are also addressed", "start_char_pos": 407, "end_char_pos": 1175 } ]
[ 0, 146, 406, 546, 774, 954, 1091 ]
1004.3299
1
We study the valuation partial differential equation for European contingent claims in a general framework of stochastic volatility models . The standard Feynman-Kac theorem cannot be directly applied because the diffusion coefficients may degenerate on the boundaries of the state space and grow faster than linearly . We allow for various types of model behavior; for example, the volatility process in our model can potentially reach zero and either stay there or instantaneously reflect, and asset-price processes may be strict local martingales under a given risk-neutral measure. Our main result is an extension of the standard Feynman-Kac theorem in the context of stochastic volatility models. Sharp results on the existence and uniqueness of classical solutions to the valuation equation are obtained using a combination of probabilistic and analytical techniques. The role of boundary conditions is also discussed .
We study the valuation partial differential equation for European contingent claims in a general framework of stochastic volatility models where the standard Feynman-Kac theorem cannot be directly applied because the diffusion coefficients may grow faster than linearly and degenerate on the boundaries of the state space . We allow for various types of model behavior; for example, the volatility process in our model can potentially reach zero and either stay there or instantaneously reflect, and asset-price processes may be strict local martingales under a given risk-neutral measure. Our main result is the existence and uniqueness of classical solutions to the valuation equation . It is shown that the value function is the smallest nonnegative classical solution to the valuation equation. Moreover, when the payoff is of strictly sublinear growth, the uniqueness holds among the same class of functions; when the payoff is of linear growth, the uniqueness holds if and only if the asset-price process is a martingale .
[ { "type": "R", "before": ". The", "after": "where the", "start_char_pos": 139, "end_char_pos": 144 }, { "type": "A", "before": null, "after": "grow faster than linearly and", "start_char_pos": 240, "end_char_pos": 240 }, { "type": "D", "before": "and grow faster than linearly", "after": null, "start_char_pos": 289, "end_char_pos": 318 }, { "type": "R", "before": "an extension of the standard Feynman-Kac theorem in the context of stochastic volatility models. Sharp results on the", "after": "the", "start_char_pos": 606, "end_char_pos": 723 }, { "type": "R", "before": "are obtained using a combination of probabilistic and analytical techniques. The role of boundary conditions is also discussed", "after": ". It is shown that the value function is the smallest nonnegative classical solution to the valuation equation. Moreover, when the payoff is of strictly sublinear growth, the uniqueness holds among the same class of functions; when the payoff is of linear growth, the uniqueness holds if and only if the asset-price process is a martingale", "start_char_pos": 798, "end_char_pos": 924 } ]
[ 0, 140, 320, 366, 586, 702, 874 ]
1004.3299
2
We study the valuation partial differential equation for European contingent claims in a general framework of stochastic volatility models where the standard Feynman-Kac theorem cannot be directly applied because the diffusion coefficients may grow faster than linearly and degenerate on the boundaries of the state space. We allow for various types of model behavior ; for example, the volatility process in our model can potentially reach zero and either stay there or instantaneously reflect, and asset-price processes may be strict local martingales under a given risk-neutral measure . Our main result is the existence and uniqueness of classical solutions to the valuation equation . It is shown that the value function is the smallest nonnegative classical solution to the valuation equation . Moreover, when the payoff is of strictly sublinear growth, the uniqueness holds among the same class of functions ; when the payoff is of linear growth , the uniqueness holds if and only if the asset-price process is a martingale.
We analyze the valuation partial differential equation for European contingent claims in a general framework of stochastic volatility models where the standard Feynman-Kac theorem cannot be directly applied because the diffusion coefficients may grow faster than linearly and degenerate on the boundaries of the state space. We allow for various types of model behavior : the volatility process in our model can potentially reach zero and either stay there or instantaneously reflect, and asset-price process may be strict local martingale . Our main result is a necessary and sufficient condition on the uniqueness of classical solutions to the valuation equation : the value function is the unique nonnegative classical solution to the valuation equation among the functions with at most linear growth if and only if the asset-price is a martingale.
[ { "type": "R", "before": "study", "after": "analyze", "start_char_pos": 3, "end_char_pos": 8 }, { "type": "R", "before": "; for example,", "after": ":", "start_char_pos": 368, "end_char_pos": 382 }, { "type": "R", "before": "processes", "after": "process", "start_char_pos": 512, "end_char_pos": 521 }, { "type": "R", "before": "martingales under a given risk-neutral measure", "after": "martingale", "start_char_pos": 542, "end_char_pos": 588 }, { "type": "R", "before": "the existence and", "after": "a necessary and sufficient condition on the", "start_char_pos": 610, "end_char_pos": 627 }, { "type": "R", "before": ". It is shown that", "after": ":", "start_char_pos": 688, "end_char_pos": 706 }, { "type": "R", "before": "smallest", "after": "unique", "start_char_pos": 733, "end_char_pos": 741 }, { "type": "R", "before": ". Moreover, when the payoff is of strictly sublinear growth, the uniqueness holds among the same class of functions ; when the payoff is of linear growth , the uniqueness holds", "after": "among the functions with at most linear growth", "start_char_pos": 799, "end_char_pos": 975 }, { "type": "D", "before": "process", "after": null, "start_char_pos": 1007, "end_char_pos": 1014 } ]
[ 0, 322, 369, 590, 689, 800, 916 ]
1004.3299
3
We analyze the valuation partial differential equation for European contingent claims in a general framework of stochastic volatility models where the standard Feynman-Kac theorem cannot be directly applied because the diffusion coefficients may grow faster than linearly and degenerate on the boundaries of the state space. We allow for various types of model behavior: the volatility process in our model can potentially reach zero and either stay there or instantaneously reflect, and asset-price process may be strict local martingale. Our main result is a necessary and sufficient condition on the uniqueness of classical solutions to the valuation equation: the value function is the unique nonnegative classical solution to the valuation equation among the functions with at most linear growth if and only if the asset-price is a martingale.
We analyze the valuation partial differential equation for European contingent claims in a general framework of stochastic volatility models where the diffusion coefficients may grow faster than linearly and degenerate on the boundaries of the state space. We allow for various types of model behavior: the volatility process in our model can potentially reach zero and either stay there or instantaneously reflect, and the asset-price process may be a strict local martingale. Our main result is a necessary and sufficient condition on the uniqueness of classical solutions to the valuation equation: the value function is the unique nonnegative classical solution to the valuation equation among functions with at most linear growth if and only if the asset-price is a martingale.
[ { "type": "D", "before": "standard Feynman-Kac theorem cannot be directly applied because the", "after": null, "start_char_pos": 151, "end_char_pos": 218 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 488, "end_char_pos": 488 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 516, "end_char_pos": 516 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 762, "end_char_pos": 765 } ]
[ 0, 324, 541 ]
1004.3525
1
We consider the exponential Levy models and we study the conditions under which f-minimal equivalent martingale measure preserves Levy property. Then we give a general formula for optimal strategy in a sense of utility maximization. Finally, we study change-point exponential Levy models, namely we give the conditions for the existence of f-minimal equivalent martingale measure and we obtain a general formula for optimal strategy from point of view of the utility maximization . We illustrate our results considering Black-Scholes model with change-point.
We study exponential Levy models with change-point which is a random variable, independent from initial Levy processes. On canonical space with initially enlarged filtration we describe all equivalent martingale measures for change-point model and we give the conditions for the existence of f-minimal equivalent martingale measure . Using the connection between utility maximisation and f-divergence minimisation, we obtain a general formula for optimal strategy in change-point case for initially enlarged filtration and also for progressively enlarged filtration when the utility is exponential . We illustrate our results considering the Black-Scholes model with change-point.
[ { "type": "R", "before": "consider the", "after": "study", "start_char_pos": 3, "end_char_pos": 15 }, { "type": "R", "before": "and we study the conditions under which f-minimal equivalent martingale measure preserves Levy property. Then we give a general formula for optimal strategy in a sense of utility maximization. Finally, we study", "after": "with change-point which is a random variable, independent from initial Levy processes. On canonical space with initially enlarged filtration we describe all equivalent martingale measures for", "start_char_pos": 40, "end_char_pos": 250 }, { "type": "R", "before": "exponential Levy models, namely", "after": "model and", "start_char_pos": 264, "end_char_pos": 295 }, { "type": "R", "before": "and", "after": ". Using the connection between utility maximisation and f-divergence minimisation,", "start_char_pos": 380, "end_char_pos": 383 }, { "type": "R", "before": "from point of view of the utility maximization", "after": "in change-point case for initially enlarged filtration and also for progressively enlarged filtration when the utility is exponential", "start_char_pos": 433, "end_char_pos": 479 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 520, "end_char_pos": 520 } ]
[ 0, 144, 232, 481 ]
1004.3525
2
We study exponential Levy models with change-point which is a random variable, independent from initial Levy processes. On canonical space with initially enlarged filtration we describe all equivalent martingale measures for change-point model and we give the conditions for the existence of f-minimal equivalent martingale measure. Using the connection between utility maximisation and f-divergence minimisation, we obtain a general formula for optimal strategy in change-point case for initially enlarged filtration and also for progressively enlarged filtration when the utility is exponential . We illustrate our results considering the Black-Scholes model with change-point.
We study exponential Levy models with change-point which is a random variable, independent from initial Levy processes. On canonical space with initially enlarged filtration we describe all equivalent martingale measures for change-point model and we give the conditions for the existence of f-divergence minimal equivalent martingale measure. Using the connection between utility maximisation and f-divergence minimisation, we obtain a general formula for optimal strategy in change-point case for initially enlarged filtration and also for progressively enlarged filtration in the case of exponential utility . We illustrate our results considering the Black-Scholes model with change-point.
[ { "type": "R", "before": "f-minimal", "after": "f-divergence minimal", "start_char_pos": 292, "end_char_pos": 301 }, { "type": "R", "before": "when the utility is exponential", "after": "in the case of exponential utility", "start_char_pos": 565, "end_char_pos": 596 } ]
[ 0, 119, 332, 598 ]
1004.4128
1
One-ports named "f-circuits", composed of similar conductors described by a monotonic polynomial, or quasi-polynomial (i.e. with positive but not necessarily integer, powers) characteristic i = f(v) are studied, focusing on the algebraic map f --> F. Here F(.) is the input conductivity characteristic; i.e., iin = F(vin) is the input current. The "power-law" "alfa-circuit" introduced in [1], for which f(v) ~ v^"alfa", is an important particular case. By means of a generalization of a parallel connection, the f-circuits are constructed from the alfa-circuits of the same topology, with different "alfa", so that the given topology is kept, and 'f' is an additive function of the connection. We observe and consider an associated, generally approximated, but, in all of the cases studied, always high-precision, specific superposition. This superposition is in terms of f --> F, and it means that F(.) of the connection is close to the sum of the input currents of the independent "alfa"-circuits, all connected in parallel to the same source. In other words, F(.) is well approximated by a linear combination of the same degrees of the independent variable as in f(.), i.e. the map of the characteristics f --> F is close to a linear one. This unexpected result is useful for understanding nonlinear algebraic circuits, and is missed in the classical theory. The cases of f(v) = D1v + D2v^2 and f(v) = D1v + D3v^3, are analyzed in examples. Special topologies when the superposition must be ideal, are also considered. In the second part [2] of the work the "circuit mechanism" that is responsible for the high precision of the superposition, in the most general case, will be explained. A brief plan of the second part 2%DIFDELCMD < ] %%% is given.
One-ports named "f-circuits", composed of similar conductors described by a monotonic polynomial, or quasi-polynomial (i.e. with positive but not necessarily integer, powers) characteristic i = f(v) are studied, focusing on the algebraic map f --> F. Here F(.) is the input conductivity characteristic; i.e., iin = F(vin) is the input current. The "power-law" "alfa-circuit" introduced in [1], for which f(v) ~ v^"alfa", is an important particular case. By means of a generalization of a parallel connection, the f-circuits are constructed from the alfa-circuits of the same topology, with different "alfa", so that the given topology is kept, and 'f' is an additive function of the connection. We observe and consider an associated, generally approximated, but, in all of the cases studied, always high-precision, specific superposition. This superposition is in terms of f --> F, and it means that F(.) of the connection is close to the sum of the input currents of the independent "alfa"-circuits, all connected in parallel to the same source. In other words, F(.) is well approximated by a linear combination of the same degrees of the independent variable as in f(.), i.e. the map of the characteristics f --> F is close to a linear one. This unexpected result is useful for understanding nonlinear algebraic circuits, and is missed in the classical theory. The cases of f(v) = D1v + D2v^2 and f(v) = D1v + D3v^3, are analyzed in examples. Special topologies when the superposition must be ideal, are also considered. In the second part [2] of the work the "circuit mechanism" that is responsible for the high precision of the superposition, in the most general case, will be explained. %DIFDELCMD < ] %%%
[ { "type": "D", "before": "A brief plan of the second part", "after": null, "start_char_pos": 1692, "end_char_pos": 1723 }, { "type": "D", "before": "2", "after": null, "start_char_pos": 1724, "end_char_pos": 1725 }, { "type": "D", "before": "is given.", "after": null, "start_char_pos": 1744, "end_char_pos": 1753 } ]
[ 0, 302, 343, 453, 694, 838, 1046, 1242, 1362, 1444, 1522, 1691 ]
1004.4153
1
We compute the improved bounds on the copula of a bivariate random vector when partial information is available, such as the values of the copula on the subset of [0,1]^2, or the value of a functional of the copula, monotone with respect to the concordance order. These results are then used to compute model-free bounds on the prices of two-asset options which make use of extra information about the dependence structure, such as the price of another two-asset option.
Improved bounds on the copula of a bivariate random vector are computed when partial information is available, such as the values of the copula on a given subset of [0,1]^2, or the value of a functional of the copula, monotone with respect to the concordance order. These results are then used to compute model-free bounds on the prices of two-asset options which make use of extra information about the dependence structure, such as the price of another two-asset option.
[ { "type": "R", "before": "We compute the improved", "after": "Improved", "start_char_pos": 0, "end_char_pos": 23 }, { "type": "A", "before": null, "after": "are computed", "start_char_pos": 74, "end_char_pos": 74 }, { "type": "R", "before": "the", "after": "a given", "start_char_pos": 150, "end_char_pos": 153 } ]
[ 0, 264 ]
1004.4327
1
KIF1A, a processive single headed kinesin superfamily motor, hydrolyzes Adenosine triphosphate (ATP) to move along a filamentous track called microtubule. The stochastic movement of KIF1A on the track is characterized by an alternating sequence of pause and translocation. The sum of the durations of pause and the following translocation defines the dwell time at the binding site on the microtubule. Using the NOSC model (Nishinari et. al. PRL, {\bf 95}, 118101 (2005)), which captures the Brownian ratchet mechanism of individual KIF1A along with its biochemical cycle, we systematically derive an analytical expression for the dwell time distribution. in between two consecutive steps each of which could be forward (+) or backward (-). We calculate the probability densities \pm\pm of these four conditional dwell times. However, for the convenience of comparison with experimental data, we also present the two distributions \pm^{*} of the times of dwell before a forward (+) and a backward (-) step. } In principle, our theoretical prediction can be tested by carrying out single-molecule experiments with adequate spatio-temporal resolution.
KIF1A, a processive single headed kinesin superfamily motor, hydrolyzes Adenosine triphosphate (ATP) to move along a filamentous track called microtubule. The stochastic movement of KIF1A on the track is characterized by an alternating sequence of pause and translocation. The sum of the durations of pause and the following translocation defines the dwell time at the binding site on the microtubule. Using the NOSC model (Nishinari et. al. PRL, {\bf 95}, 118101 (2005)), which captures the Brownian ratchet mechanism of individual KIF1A along with its biochemical cycle, we systematically derive an analytical expression for the dwell time distribution. More detailed information is contained in the probability densities of the "conditional dwell times" \tau_{\pm\pm in between two consecutive steps each of which could be forward (+) or backward (-). We calculate the probability densities \pm\pm of these four conditional dwell times. However, for the convenience of comparison with experimental data, we also present the two distributions \pm^{*} of the times of dwell before a forward (+) and a backward (-) step. } In principle, our theoretical prediction can be tested by carrying out single-molecule experiments with adequate spatio-temporal resolution.
[ { "type": "A", "before": null, "after": "More detailed information is contained in the probability densities of the \"conditional dwell times\" \\tau_{\\pm\\pm", "start_char_pos": 656, "end_char_pos": 656 } ]
[ 0, 154, 272, 401, 741, 826, 1007 ]
1004.4327
2
KIF1A, a processive single headed kinesin superfamily motor, hydrolyzes Adenosine triphosphate (ATP) to move along a filamentous track called microtubule. The stochastic movement of KIF1A on the track is characterized by an alternating sequence of pause and translocation. The sum of the durations of pause and the following translocation defines the dwell time at the binding site on the microtubule . Using the NOSC model (Nishinari et. al. PRL, {\bf 95}, 118101 (2005)) , which captures the Brownian ratchet mechanism of individual KIF1A along with its biochemical cycle , we systematically derive an analytical expression for the dwell time distribution. More detailed information is contained in the probability densities of the "conditional dwell times" \pm\pm in between two consecutive steps each of which could be forward (+) or backward (-). We calculate the probability densities \pm\pm of these four conditional dwell times. However, for the convenience of comparison with experimental data, we also present the two distributions \pm^{*} of the times of dwell before a forward (+) and a backward (-) step. In principle, our theoretical prediction can be tested by carrying out single-molecule experiments with adequate spatio-temporal resolution.
KIF1A, a processive single headed kinesin superfamily motor, hydrolyzes Adenosine triphosphate (ATP) to move along a filamentous track called microtubule. The stochastic movement of KIF1A on the track is characterized by an alternating sequence of pause and translocation. The sum of the durations of pause and the following translocation defines the dwell time . Using the NOSC model (Nishinari et. al. PRL, {\bf 95}, 118101 (2005)) of individual KIF1A , we systematically derive an analytical expression for the dwell time distribution. More detailed information is contained in the probability densities of the "conditional dwell times" \pm\pm in between two consecutive steps each of which could be forward (+) or backward (-). We calculate the probability densities \pm\pm of these four conditional dwell times. However, for the convenience of comparison with experimental data, we also present the two distributions \pm^{*} of the times of dwell before a forward (+) and a backward (-) step. In principle, our theoretical prediction can be tested by carrying out single-molecule experiments with adequate spatio-temporal resolution.
[ { "type": "D", "before": "at the binding site on the microtubule", "after": null, "start_char_pos": 362, "end_char_pos": 400 }, { "type": "D", "before": ", which captures the Brownian ratchet mechanism", "after": null, "start_char_pos": 473, "end_char_pos": 520 }, { "type": "D", "before": "along with its biochemical cycle", "after": null, "start_char_pos": 541, "end_char_pos": 573 } ]
[ 0, 154, 272, 658, 851, 936, 1117 ]
1004.4428
1
This is the second part, after [1], of the research devoted to analysis of 1-ports composed of similar conductors ("f-circuits") described by the characteristic i = f(v) of a polynomial type. This analysis is performed by means of the power-law "alfa" -circuit " introduced in [2], for which f(v) ~ v^"alfa". The f-circuits are constructed from the f-circuits of the same topology, with the proper "alfa", so that the given topology is kept, and 'f' is an additive function of the connection. Explaining the situation described in detail in [1], we note and analyze a simple "circuit mechanism" that causes the difference between the input current of the f-circuit and the sum of the input currents of the f-circuits before the composition to be relatively small. The case of two degrees, f(v) = Dmv^m + Dnv^n, m unequal n, is treated in the main proofs. Some simulations are presented, and some boundaries for the error of the superposition are found. The cases of f(.) being a polynomial of the third or fourth degrees are finally briefly considered.
This is the second part, after [1], of the research devoted to analysis of 1-ports composed of similar conductors ("f-circuits") described by the characteristic i = f(v) of a polynomial type. This analysis is performed by means of the power-law "alfa" -circuits " introduced in [2], for which f(v) ~ v^"alfa". The f-circuits are constructed from the "alfa"-circuits of the same topology, with the proper "alfa", so that the given topology is kept, and 'f' is an additive function of the connection. Explaining the situation described in detail in [1], we note and analyze a simple "circuit mechanism" that causes the difference between the input current of the f-circuit and the sum of the input currents of the f-circuits before the composition to be relatively small. The case of two degrees, f(v) = Dmv^m + Dnv^n, m unequal n, is treated in the main proofs. Some simulations are presented, and some boundaries for the error of the superposition are found. The cases of f(.) being a polynomial of the third or fourth degrees are finally briefly considered.
[ { "type": "R", "before": "-circuit", "after": "-circuits", "start_char_pos": 252, "end_char_pos": 260 }, { "type": "R", "before": "f-circuits", "after": "\"alfa\"-circuits", "start_char_pos": 349, "end_char_pos": 359 } ]
[ 0, 191, 308, 492, 763, 854, 952 ]
1004.4431
1
Exploiting the performance of today's processors requires , apart from an intimate knowledge of the microarchitecture , taking into account the influence of an ever-growing complexity in thread and cache topology. LIKWID is a collection of small command line applications that support inexperienced as well as seasoned programmers in developing and running software in an efficient way. The development of LIKWID is targeted on providing access to performance-oriented tooling in a transparent and easy manner. We present the four tools that comprise LIKWID and show the influence of thread pinning on performance using the well-known OpenMP STREAM triad benchmark . On the example of a stencil code specifically optimized to utilize cache topology we demonstrate the usage of likwid-pin and likwid-perfCtr .
Exploiting the performance of today's processors requires intimate knowledge of the microarchitecture as well as an awareness of the ever-growing complexity in thread and cache topology. LIKWID is a set of command-line utilities that addresses four key problems: Probing the thread and cache topology of a shared-memory node, enforcing thread-core affinity on a program, measuring performance counter metrics, and toggling hardware prefetchers. An API for using the performance counting features from user code is also included. We clearly state the differences to the widely used PAPI interface. To demonstrate the capabilities of the tool set we show the influence of thread pinning on performance using the well-known OpenMP STREAM triad benchmark , and use the affinity and hardware counter tools to study the performance of a stencil code specifically optimized to utilize shared caches on multicore chips .
[ { "type": "D", "before": ", apart from an", "after": null, "start_char_pos": 58, "end_char_pos": 73 }, { "type": "R", "before": ", taking into account the influence of an", "after": "as well as an awareness of the", "start_char_pos": 118, "end_char_pos": 159 }, { "type": "R", "before": "collection of small command line applications that support inexperienced as well as seasoned programmers in developing and running software in an efficient way. The development of LIKWID is targeted on providing access to performance-oriented tooling in a transparent and easy manner. We present the four tools that comprise LIKWID and", "after": "set of command-line utilities that addresses four key problems: Probing the thread and cache topology of a shared-memory node, enforcing thread-core affinity on a program, measuring performance counter metrics, and toggling hardware prefetchers. An API for using the performance counting features from user code is also included. We clearly state the differences to the widely used PAPI interface. To demonstrate the capabilities of the tool set we", "start_char_pos": 226, "end_char_pos": 561 }, { "type": "R", "before": ". On the example", "after": ", and use the affinity and hardware counter tools to study the performance", "start_char_pos": 665, "end_char_pos": 681 }, { "type": "R", "before": "cache topology we demonstrate the usage of likwid-pin and likwid-perfCtr", "after": "shared caches on multicore chips", "start_char_pos": 734, "end_char_pos": 806 } ]
[ 0, 213, 386, 510, 666 ]
1004.4709
1
In this paper, we address the problem of content placement in peer-to-peer systems, with the objective of maximizing the utilization of peers' uplink bandwidth resources. We consider system performance under a many-user asymptotic. We identify optimal content placement strategies in a particular scenario of limited content catalogue , casting the problem into the framework of loss networks . We then turn to an alternative "large catalogue" scaling where the catalogue size grows with the peer population. Relating the system performance to properties of a specific random graph model, we establish a content placement strategy which again maximizes system performance , provided storage space per peer grows unboundedly, although arbitrarily slowly, with system size.
In this paper, we address the problem of content placement in peer-to-peer systems, with the objective of maximizing the utilization of peers' uplink bandwidth resources. We consider system performance under a many-user asymptotic. We distinguish two scenarios, namely "Distributed Server Networks" (DSN) for which requests are exogenous to the system, and "Pure P2P Networks" (PP2PN) for which requests emanate from the peers themselves. For both scenarios, we consider a loss network model of performance, and determine asymptotically optimal content placement strategies in the case of a limited content catalogue . We then turn to an alternative "large catalogue" scaling where the catalogue size scales with the peer population. Under this scaling, we establish that storage space per peer must necessarily grow unboundedly if bandwidth utilization is to be maximized. Relating the system performance to properties of a specific random graph model, we then identify a content placement strategy and a request acceptance policy which jointly maximize bandwidth utilization , provided storage space per peer grows unboundedly, although arbitrarily slowly, with system size.
[ { "type": "R", "before": "identify", "after": "distinguish two scenarios, namely \"Distributed Server Networks\" (DSN) for which requests are exogenous to the system, and \"Pure P2P Networks\" (PP2PN) for which requests emanate from the peers themselves. For both scenarios, we consider a loss network model of performance, and determine asymptotically", "start_char_pos": 235, "end_char_pos": 243 }, { "type": "R", "before": "a particular scenario of", "after": "the case of a", "start_char_pos": 284, "end_char_pos": 308 }, { "type": "D", "before": ", casting the problem into the framework of loss networks", "after": null, "start_char_pos": 335, "end_char_pos": 392 }, { "type": "R", "before": "grows", "after": "scales", "start_char_pos": 477, "end_char_pos": 482 }, { "type": "A", "before": null, "after": "Under this scaling, we establish that storage space per peer must necessarily grow unboundedly if bandwidth utilization is to be maximized.", "start_char_pos": 509, "end_char_pos": 509 }, { "type": "R", "before": "establish", "after": "then identify", "start_char_pos": 593, "end_char_pos": 602 }, { "type": "R", "before": "which again maximizes system performance", "after": "and a request acceptance policy which jointly maximize bandwidth utilization", "start_char_pos": 632, "end_char_pos": 672 } ]
[ 0, 170, 231, 394, 508 ]
1004.5048
1
We predict that multi-scale correlations of amino acid positions within protein sequences generically enhance protein propensity for promiscuous binding. We show that the enhanced sequence correlations are evolutionary selected URLanismal proteomes. We predict analytically the robustness of this biophysical effect with respect to the form of the interaction potential .
We predict that diagonal correlations of amino acid positions within protein sequences statistically enhance protein propensity for promiscuous binding. Diagonal correlations represent statistically significant repeats of sequence patterns where amino acids of the same type are clustered together. The predicted effect is qualitatively robust with respect to the form of the microscopic interaction potential and the average amino acid composition. We suggest experimental and bioinformatics approaches to test the predicted effect .
[ { "type": "R", "before": "multi-scale", "after": "diagonal", "start_char_pos": 16, "end_char_pos": 27 }, { "type": "R", "before": "generically", "after": "statistically", "start_char_pos": 90, "end_char_pos": 101 }, { "type": "R", "before": "We show that the enhanced sequence correlations are evolutionary selected URLanismal proteomes. We predict analytically the robustness of this biophysical effect", "after": "Diagonal correlations represent statistically significant repeats of sequence patterns where amino acids of the same type are clustered together. The predicted effect is qualitatively robust", "start_char_pos": 154, "end_char_pos": 315 }, { "type": "R", "before": "interaction potential", "after": "microscopic interaction potential and the average amino acid composition. We suggest experimental and bioinformatics approaches to test the predicted effect", "start_char_pos": 348, "end_char_pos": 369 } ]
[ 0, 153, 249 ]
1004.5048
2
We predict that diagonal correlations of amino acid positions within protein sequences statistically enhance protein propensity for promiscuous binding. Diagonal correlations represent statistically significant repeats of sequence patterns where amino acids of the same type are clustered together. The predicted effect is qualitatively robust with respect to the form of the microscopic interaction potential and the average amino acid composition. We suggest experimental and bioinformatics approaches to test ] the predicted effect.
We predict analytically that diagonal correlations of amino acid positions within protein sequences statistically enhance protein propensity for nonspecific binding. We use the term 'promiscuity' to describe such nonspecific binding. Diagonal correlations represent statistically significant repeats of sequence patterns where amino acids of the same type are clustered together. The predicted effect is qualitatively robust with respect to the form of the microscopic interaction potentials and the average amino acid composition. Our analytical results provide an explanation for the enhanced diagonal correlations observed in hubs of URLanismal proteomes J. Mol. Biol. 409, 439 (2011)]. We suggest experiments that will allow direct testing of the predicted effect.
[ { "type": "A", "before": null, "after": "analytically", "start_char_pos": 11, "end_char_pos": 11 }, { "type": "R", "before": "promiscuous binding.", "after": "nonspecific binding. We use the term 'promiscuity' to describe such nonspecific binding.", "start_char_pos": 133, "end_char_pos": 153 }, { "type": "R", "before": "potential", "after": "potentials", "start_char_pos": 401, "end_char_pos": 410 }, { "type": "R", "before": "We suggest experimental and bioinformatics approaches to test", "after": "Our analytical results provide an explanation for the enhanced diagonal correlations observed in hubs of URLanismal proteomes", "start_char_pos": 451, "end_char_pos": 512 }, { "type": "A", "before": null, "after": "J. Mol. Biol. 409, 439 (2011)", "start_char_pos": 513, "end_char_pos": 513 }, { "type": "A", "before": null, "after": ". We suggest experiments that will allow direct testing of", "start_char_pos": 514, "end_char_pos": 514 } ]
[ 0, 153, 299, 450 ]
1004.5524
1
The framework of this paper is that of uncertainty, that is when no reference probability measure is given. To every convex regular risk measure \rho on {\cal C}_b(\Omega), we associate a canonical c_{\rho of probability measures . Furthermore{\cal the convex risk measure admits a dual representation in terms of a weakly relatively compact set of probability measures absolutely continuous with respect to some probability measure belonging to the canonical c_{\rho . To get these results we study the topological properties of the dual of the Banach space L^1(c) associated to some capacity cand we prove a representation Theorem for convex risk measures on L^1(c). As applications, we obtain that every G-expectation \E (resp. in case of uncertain volatility every sublinear risk measure \rho), admits a representation with a numerable family of probability measures absolutely continuous with respect to some P belonging to the canonical c-class, with c(f )= \E(|f|) , (resp. \rho(-|f|)) .
The framework of this paper is that of risk measuring under uncertainty, which is when no reference probability measure is given. To every regular convex risk measure on {\cal C}_b(\Omega), we associate a unique equivalence class of probability measures on Borel sets, characterizing the riskless non positive elements of{\cal C _b(\Omega). We prove that the convex risk measure has a dual representation with a countable set of probability measures absolutely continuous with respect to a certain probability measure in this class . To get these results we study the topological properties of the dual of the Banach space L^1(c) associated to a capacity c. As application we obtain that every G-expectation \E has a representation with a countable set of probability measures absolutely continuous with respect to a probability measure P such that P(|f| )= 0 iff \E(|f|) =0. We also apply our results to the case of uncertain volatility .
[ { "type": "R", "before": "uncertainty, that", "after": "risk measuring under uncertainty, which", "start_char_pos": 39, "end_char_pos": 56 }, { "type": "R", "before": "convex regular risk measure \\rho", "after": "regular convex risk measure", "start_char_pos": 117, "end_char_pos": 149 }, { "type": "R", "before": "canonical c_{\\rho", "after": "unique equivalence class", "start_char_pos": 188, "end_char_pos": 205 }, { "type": "R", "before": ". Furthermore", "after": "on Borel sets, characterizing the riskless non positive elements of", "start_char_pos": 230, "end_char_pos": 243 }, { "type": "A", "before": null, "after": "C", "start_char_pos": 249, "end_char_pos": 249 }, { "type": "A", "before": null, "after": "_b(\\Omega). We prove that", "start_char_pos": 250, "end_char_pos": 250 }, { "type": "R", "before": "admits", "after": "has", "start_char_pos": 275, "end_char_pos": 281 }, { "type": "R", "before": "in terms of a weakly relatively compact", "after": "with a countable", "start_char_pos": 304, "end_char_pos": 343 }, { "type": "R", "before": "some probability measure belonging to the canonical c_{\\rho", "after": "a certain probability measure in this class", "start_char_pos": 410, "end_char_pos": 469 }, { "type": "R", "before": "some capacity cand we prove a representation Theorem for convex risk measures on L^1(c). As applications,", "after": "a capacity c. As application", "start_char_pos": 582, "end_char_pos": 687 }, { "type": "R", "before": "(resp. in case of uncertain volatility every sublinear risk measure \\rho), admits", "after": "has", "start_char_pos": 726, "end_char_pos": 807 }, { "type": "R", "before": "numerable family", "after": "countable set", "start_char_pos": 832, "end_char_pos": 848 }, { "type": "R", "before": "some P belonging to the canonical c-class, with c(f", "after": "a probability measure P such that P(|f|", "start_char_pos": 911, "end_char_pos": 962 }, { "type": "A", "before": null, "after": "0 iff", "start_char_pos": 966, "end_char_pos": 966 }, { "type": "R", "before": ", (resp. \\rho(-|f|))", "after": "=0. We also apply our results to the case of uncertain volatility", "start_char_pos": 975, "end_char_pos": 995 } ]
[ 0, 107, 231, 471, 670, 983 ]
1005.0182
1
In the present work we introduce a novel multi-agent model with the aim to reproduce the dynamics of a double auction market at microscopic time scale through a faithful simulation of the matching mechanics in the limit order book. The model follows a "zero intelligence" approach where the actions of the traders are related to a stochastic variable, the market sentiment , which we define as a mixture of public and private information. The model, despite the parsimonious approach , is able to reproduce several empirical features of the high-frequency dynamics of the market microstructure not only related to the price movements but also to the deposition of the orders in the book.
In the present work we introduce a novel multi-agent model with the aim to reproduce the dynamics of a double auction market at microscopic time scale through a faithful simulation of the matching mechanics in the limit order book. The agents follow a noise decision making process where their actions are related to a stochastic variable, " the market sentiment " , which we define as a mixture of public and private information. The model, despite making just few basic assumptions over the trading strategies of the agents , is able to reproduce several empirical features of the high-frequency dynamics of the market microstructure not only related to the price movements but also to the deposition of the orders in the book.
[ { "type": "R", "before": "model follows a \"zero intelligence\" approach where the actions of the traders", "after": "agents follow a noise decision making process where their actions", "start_char_pos": 236, "end_char_pos": 313 }, { "type": "A", "before": null, "after": "\"", "start_char_pos": 352, "end_char_pos": 352 }, { "type": "A", "before": null, "after": "\"", "start_char_pos": 374, "end_char_pos": 374 }, { "type": "R", "before": "the parsimonious approach", "after": "making just few basic assumptions over the trading strategies of the agents", "start_char_pos": 460, "end_char_pos": 485 } ]
[ 0, 231, 440 ]
1005.0496
1
We develop a non-life reserving model using a stable-1/2 random bridge to simulate the accumulation of paid claims, allowing for an arbitrary choice of a priori distribution for the ultimate loss. Taking a Bayesian approach to the reserving problem, we derive the process of the conditional distribution of the ultimate loss. The ` best-estimate ultimate loss process ' is given by the conditional expectation of the ultimate loss. We derive explicit expressions for the best-estimate ultimate loss process, and for expected recoveries arising from aggregate excess-of-loss reinsurance treaties. Use of a deterministic time change allows for the matching of any initial (increasing) development pattern for the paid claims. We show that these methods are well-suited to the modelling of claims where there is a non-trivial probability of catastrophic loss. The generalized inverse-Gaussian (GIG) distribution is shown to be a natural choice for the a priori ultimate loss distribution. For particular GIG parameter choices, the best-estimate ultimate loss process can be written as a rational function of the paid-claims process. We extend the model to include a second paid-claims process, and allow the two processes to be dependent. The results obtained can be applied to the modelling of multiple lines of business or multiple origin years. The multidimensional model has the attractive property that the dimensionality of calculations remains low, regardless of the number of paid-claims processes. An algorithm is provided for the simulation of the paid-claims processes.
We develop a class of non-life reserving models using a stable-1/2 random bridge to simulate the accumulation of paid claims, allowing for an essentially arbitrary choice of a priori distribution for the ultimate loss. Taking an information-based approach to the reserving problem, we derive the process of the conditional distribution of the ultimate loss. The " best-estimate ultimate loss process " is given by the conditional expectation of the ultimate loss. We derive explicit expressions for the best-estimate ultimate loss process, and for expected recoveries arising from aggregate excess-of-loss reinsurance treaties. Use of a deterministic time change allows for the matching of any initial (increasing) development pattern for the paid claims. We show that these methods are well-suited to the modelling of claims where there is a non-trivial probability of catastrophic loss. The generalized inverse-Gaussian (GIG) distribution is shown to be a natural choice for the a priori ultimate loss distribution. For particular GIG parameter choices, the best-estimate ultimate loss process can be written as a rational function of the paid-claims process. We extend the model to include a second paid-claims process, and allow the two processes to be dependent. The results obtained can be applied to the modelling of multiple lines of business or multiple origin years. The multi-dimensional model has the property that the dimensionality of calculations remains low, regardless of the number of paid-claims processes. An algorithm is provided for the simulation of the paid-claims processes.
[ { "type": "A", "before": null, "after": "class of", "start_char_pos": 13, "end_char_pos": 13 }, { "type": "R", "before": "model", "after": "models", "start_char_pos": 33, "end_char_pos": 38 }, { "type": "A", "before": null, "after": "essentially", "start_char_pos": 133, "end_char_pos": 133 }, { "type": "R", "before": "a Bayesian", "after": "an information-based", "start_char_pos": 206, "end_char_pos": 216 }, { "type": "R", "before": "`", "after": "\"", "start_char_pos": 332, "end_char_pos": 333 }, { "type": "R", "before": "'", "after": "\"", "start_char_pos": 370, "end_char_pos": 371 }, { "type": "R", "before": "multidimensional", "after": "multi-dimensional", "start_char_pos": 1351, "end_char_pos": 1367 }, { "type": "D", "before": "attractive", "after": null, "start_char_pos": 1382, "end_char_pos": 1392 } ]
[ 0, 198, 327, 433, 597, 725, 858, 987, 1131, 1237, 1346, 1505 ]
1005.0877
1
Detrending moving average (DMA) is a widely used method to quantify the correlation of non-stationary signals. We generalize DMA to multifractal detrending moving average (MFDMA) , and then extend one-dimensional MFDMA to two-dimensional version. In the paper, we elaborate one-dimensional and two-dimensional MFDMA theoretically and apply the methods to synthetic multifractal measures . We find that the numerical estimations of the multifractal scaling exponent \tau(q) and the multifractal spectrum f(\alpha) are in good agreement with the theoretical values. We also compare the performance of MFDMA with MFDFA, and report that MFDMA is superior to MFDFA when apply them to analysis the properties of one-dimensional and two-dimensional multifractal measures .
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of non-stationary time series and the long-range correlations of fractal surfaces, which contains a parameter \theta determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (\theta=0), centered (\theta=0.5), and forward (\theta=1) detrending windows . We find that the estimated multifractal scaling exponent \tau(q) and the singularity spectrum f(\alpha) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, which provides the most accurate estimates of the scaling exponents with lowest error bars, while the centered MFDMA method has the worse performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis (MFDFA). The one-dimensional backward MFDMA method is applied to analyzing the time series of Shanghai Stock Exchange Composite Index and its multifractal nature is confirmed .
[ { "type": "R", "before": "Detrending", "after": "The detrending", "start_char_pos": 0, "end_char_pos": 10 }, { "type": "A", "before": null, "after": "algorithm", "start_char_pos": 32, "end_char_pos": 32 }, { "type": "R", "before": "method", "after": "technique", "start_char_pos": 50, "end_char_pos": 56 }, { "type": "R", "before": "correlation", "after": "long-term correlations", "start_char_pos": 73, "end_char_pos": 84 }, { "type": "R", "before": "signals. We generalize DMA to", "after": "time series and the long-range correlations of fractal surfaces, which contains a parameter \\theta determining the position of the detrending window. We develop", "start_char_pos": 103, "end_char_pos": 132 }, { "type": "R", "before": ", and then extend", "after": "algorithms for the analysis of", "start_char_pos": 180, "end_char_pos": 197 }, { "type": "R", "before": "MFDMA to two-dimensional version. In the paper, we elaborate", "after": "multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the", "start_char_pos": 214, "end_char_pos": 274 }, { "type": "R", "before": "theoretically and apply the methods to", "after": "methods is investigated using", "start_char_pos": 317, "end_char_pos": 355 }, { "type": "A", "before": null, "after": "with analytical solutions for backward (\\theta=0), centered (\\theta=0.5), and forward (\\theta=1) detrending windows", "start_char_pos": 388, "end_char_pos": 388 }, { "type": "R", "before": "numerical estimations of the", "after": "estimated", "start_char_pos": 408, "end_char_pos": 436 }, { "type": "R", "before": "multifractal", "after": "singularity", "start_char_pos": 483, "end_char_pos": 495 }, { "type": "R", "before": "We also compare the performance of MFDMA with MFDFA, and report that MFDMA is superior to MFDFA when apply them to analysis the properties of one-dimensional and two-dimensional multifractal measures", "after": "In addition, the backward MFDMA method has the best performance, which provides the most accurate estimates of the scaling exponents with lowest error bars, while the centered MFDMA method has the worse performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis (MFDFA). The one-dimensional backward MFDMA method is applied to analyzing the time series of Shanghai Stock Exchange Composite Index and its multifractal nature is confirmed", "start_char_pos": 566, "end_char_pos": 765 } ]
[ 0, 111, 247, 390, 565 ]
1005.1356
1
Based on a point of view that solvency and security are first, this paper considers optimal control and financial valuation problems of a large insurance company facing positive transaction cost asked by reinsurer . The company controls proportional reinsurance and dividend pay-out policy to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. The paper aims at finding explicitly value function and an optimal control policy of the company by using stochastic analysis and PDE methods. The results present the best equilibrium point between maximization of dividend pay-outs and minimization of risks .
Based on a point of view that solvency and security are first, this paper considers regular-singular stochastic optimal control problem of a large insurance company facing positive transaction cost asked by reinsurer under solvency constraint . The company controls proportional reinsurance and dividend pay-out policy to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. The paper aims at deriving the optimal retention ratio, dividend payout level, explicit value function of the insurance company via stochastic analysis and PDE methods. The results present the best equilibrium point between maximization of dividend pay-outs and minimization of risks . Moreover, the paper also gets a risk-based capital standard to ensure the capital requirement of can cover the total given risk and discusses how the model parameters, such as, volatility, premium rate, and risk level, impact on risk-based capital standard, optimal retention ratio, optimal dividend payout level and the company's profit by numerical results .
[ { "type": "R", "before": "optimal control and financial valuation problems", "after": "regular-singular stochastic optimal control problem", "start_char_pos": 84, "end_char_pos": 132 }, { "type": "A", "before": null, "after": "under solvency constraint", "start_char_pos": 214, "end_char_pos": 214 }, { "type": "R", "before": "finding explicitly value function and an optimal control policy of the company by using", "after": "deriving the optimal retention ratio, dividend payout level, explicit value function of the insurance company via", "start_char_pos": 403, "end_char_pos": 490 }, { "type": "A", "before": null, "after": ". Moreover, the paper also gets a risk-based capital standard to ensure the capital requirement of can cover the total given risk and discusses how the model parameters, such as, volatility, premium rate, and risk level, impact on risk-based capital standard, optimal retention ratio, optimal dividend payout level and the company's profit by numerical results", "start_char_pos": 643, "end_char_pos": 643 } ]
[ 0, 216, 384, 527 ]
1005.1356
2
Based on a point of view that solvency and security are first, this paper considers regular-singular stochastic optimal control problem of a large insurance company facing positive transaction cost asked by reinsurer under solvency constraint. The company controls proportional reinsurance and dividend pay-out policy to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. The paper aims at deriving the optimal retention ratio, dividend payout level, explicit value function of the insurance company via stochastic analysis and PDE methods. The results present the best equilibrium point between maximization of dividend pay-outs and minimization of risks. Moreover, the paper also gets a risk-based capital standard to ensure the capital requirement of can cover the total given risk and discusses how the model parameters, such as, volatility, premium rate, and risk level, impact on risk-based capital standard, optimal retention ratio, optimal dividend payout level and the company's profit by numerical results .
Based on a point of view that solvency and security are first, this paper considers regular-singular stochastic optimal control problem of a large insurance company facing positive transaction cost asked by reinsurer under solvency constraint. The company controls proportional reinsurance and dividend pay-out policy to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. The paper aims at deriving the optimal retention ratio, dividend payout level, explicit value function of the insurance company via stochastic analysis and PDE methods. The results present the best equilibrium point between maximization of dividend pay-outs and minimization of risks. The paper also gets a risk-based capital standard to ensure the capital requirement of can cover the total given risk . We present numerical results to make analysis how the model parameters, such as, volatility, premium rate, and risk level, impact on risk-based capital standard, optimal retention ratio, optimal dividend payout level and the company's profit .
[ { "type": "R", "before": "Moreover, the", "after": "The", "start_char_pos": 697, "end_char_pos": 710 }, { "type": "R", "before": "and discusses", "after": ". We present numerical results to make analysis", "start_char_pos": 825, "end_char_pos": 838 }, { "type": "D", "before": "by numerical results", "after": null, "start_char_pos": 1035, "end_char_pos": 1055 } ]
[ 0, 243, 411, 580, 696 ]