Dataset schema (one column per entry; the rows below follow this layout in order):

  doc_id           string, lengths 2-10 (arXiv identifier, e.g. "1011.3685")
  revision_depth   string, 5 distinct values (revision number, e.g. "1" or "2")
  before_revision  string, lengths 3-309k (abstract text before the revision)
  after_revision   string, lengths 5-309k (abstract text after the revision)
  edit_actions     list of edit operations (type, before, after, start_char_pos, end_char_pos)
  sents_char_pos   sequence of sentence-start character offsets into before_revision
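A minimal sketch of loading rows with this schema using the Hugging Face datasets library; the repository path "username/arxiv-abstract-revisions" is a placeholder, not the dataset's actual name:

from datasets import load_dataset

# Placeholder repository path -- substitute the real dataset name.
ds = load_dataset("username/arxiv-abstract-revisions", split="train")

row = ds[0]
print(row["doc_id"], row["revision_depth"])    # e.g. "1011.3685", "2"
print(row["before_revision"][:100])            # abstract text before the revision
print(len(row["edit_actions"]), "edit actions")
print(row["sents_char_pos"][:5])               # sentence-start offsets into before_revision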
1011.3685
2
This paper deals with multidimensional dynamic risk measures induced by conditional g-expectations. A notion of multidimensional g-expectation is proposed to provide a multidimensional version of nonlinear expectations. By a technical result on explicit expressions for the comparison theorem, uniqueness theorem and viability on a rectangle of solutions to multidimensional backward stochastic differential equations, some necessary and sufficient conditions are given for the constancy, monotonicity, positivity, homogeneity and translatability properties of multidimensional conditional g-expectations and multidimensional dynamic risk measures; we prove that a multidimensional dynamic g-risk measure is nonincreasingly convex if and only if the generator g satisfies a quasi-monotone increasingly convex condition. A general dual representation is given for the multidimensional dynamic convex g-risk measure in which the penalty term is expressed more precisely. Similarly to the one dimensional case, a sufficient condition for a multidimensional dynamic risk measure to be a g-expectation is also explored . As to applications, we show how this multidimensional approach can be applied to measure the insolvency risk of a firm with interacted subsidiaries \protect .
This paper deals with multidimensional dynamic risk measures induced by conditional g-expectations. A notion of multidimensional g-expectation is proposed to provide a multidimensional version of nonlinear expectations. By a technical result on explicit expressions for the comparison theorem, uniqueness theorem and viability on a rectangle of solutions to multidimensional backward stochastic differential equations, some necessary and sufficient conditions are given for the constancy, monotonicity, positivity, homogeneity and translatability properties of multidimensional conditional g-expectations and multidimensional dynamic risk measures; we prove that a multidimensional dynamic g-risk measure is nonincreasingly convex if and only if the generator g satisfies a quasi-monotone increasingly convex condition. A general dual representation is given for the multidimensional dynamic convex g-risk measure in which the penalty term is expressed more precisely. It is shown that model uncertainty leads to the convexity of risk measures . As to applications, we show how this multidimensional approach can be applied to measure the insolvency risk of a firm with interacted subsidiaries ; optimal risk sharing for\protect\gamma -tolerant g-risk measures is investigated. Insurance g-risk measure and other ways to induce g-risk measures are also studied at the end of the paper .
[ { "type": "R", "before": "Similarly to the one dimensional case, a sufficient condition for a multidimensional dynamic risk measure to be a g-expectation is also explored", "after": "It is shown that model uncertainty leads to the convexity of risk measures", "start_char_pos": 969, "end_char_pos": 1113 }, { "type": "A", "before": null, "after": "; optimal risk sharing for", "start_char_pos": 1264, "end_char_pos": 1264 }, { "type": "A", "before": null, "after": "\\gamma -tolerant g-risk measures is investigated. Insurance g-risk measure and other ways to induce g-risk measures are also studied at the end of the paper", "start_char_pos": 1272, "end_char_pos": 1272 } ]
[ 0, 99, 219, 648, 819, 968, 1115 ]
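Each edit_actions entry records a replace ("R"), add ("A"), or delete ("D") operation together with the affected text spans and character offsets into before_revision; sents_char_pos lists sentence-start offsets into the same string. A minimal sketch of post-processing a row, assuming the offsets always index the original before_revision text (so actions are applied back-to-front); exact whitespace handling in the released data may differ, so the reconstruction is approximate:

def apply_edit_actions(before, actions):
    # Offsets index the original string, so apply actions from the end
    # backwards to keep earlier offsets valid.
    out = before
    for act in sorted(actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = act["start_char_pos"], act["end_char_pos"]
        new_text = act["after"] or ""  # "D" (delete) actions carry after=None
        out = out[:start] + new_text + out[end:]
    return out

def split_sentences(before, sent_starts):
    # sents_char_pos appears to hold sentence-start offsets into before_revision.
    bounds = list(sent_starts) + [len(before)]
    return [before[a:b].strip() for a, b in zip(bounds, bounds[1:])]

# Toy example mirroring the edit_actions format used in the rows above.
before = "The quick brown fox."
actions = [
    {"type": "R", "before": "quick", "after": "swift",
     "start_char_pos": 4, "end_char_pos": 9},
    {"type": "A", "before": None, "after": " Indeed.",
     "start_char_pos": 20, "end_char_pos": 20},
]
print(apply_edit_actions(before, actions))  # "The swift brown fox. Indeed."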
1011.3736
1
We present a novel approach to the pricing of financial instruments in emission markets, for example, the EU ETS. The proposed hybrid model is positioned between existing complex full equilibrium models and pure risk-neutral models. Using an exogenously specified demand for a polluting good it gives a causal explanation for the accumulation of CO2 emissions and takes into account the feedback effect from the cost of carbon to the rate at which the market emits CO2. We derive a forward-backward stochastic differential equation for the price process of the allowance certificate and solve the associated semilinear partial differential equation numerically. We also show that derivatives written on the allowance certificate satisfy a linear partial differential equation. The model is extended to emission markets with multiple compliance periods and we analyse the impact different intertemporal connecting mechanisms, such as borrowing, banking and withdrawal, have on the allowance price.
We present a novel approach to the pricing of financial instruments in emission markets, for example, the EU ETS. The proposed structural model is positioned between existing complex full equilibrium models and pure reduced form models. Using an exogenously specified demand for a polluting good it gives a causal explanation for the accumulation of CO2 emissions and takes into account the feedback effect from the cost of carbon to the rate at which the market emits CO2. We derive a forward-backward stochastic differential equation for the price process of the allowance certificate and solve the associated semilinear partial differential equation numerically. We also show that derivatives written on the allowance certificate satisfy a linear partial differential equation. The model is extended to emission markets with multiple compliance periods and we analyse the impact different intertemporal connecting mechanisms, such as borrowing, banking and withdrawal, have on the allowance price.
[ { "type": "R", "before": "hybrid", "after": "structural", "start_char_pos": 127, "end_char_pos": 133 }, { "type": "R", "before": "risk-neutral", "after": "reduced form", "start_char_pos": 212, "end_char_pos": 224 } ]
[ 0, 113, 232, 469, 661, 776 ]
1011.3975
1
This paper approaches the problem of computing the price of Monthly Sum derivative in a framework of Cumulant Expansion .
Cumulant expansion is used to derive accurate closed-form approximation for Monthly Sum Options in case of constant volatility model. Payoff of Monthly Sum Option is based on sum of N caped (and probably floored) returns. It is noticed, that 1/\sqrt{N .
[ { "type": "R", "before": "This paper approaches the problem of computing the price", "after": "Cumulant expansion is used to derive accurate closed-form approximation for Monthly Sum Options in case of constant volatility model. Payoff", "start_char_pos": 0, "end_char_pos": 56 }, { "type": "R", "before": "derivative in a framework of Cumulant Expansion", "after": "Option is based on sum of N caped (and probably floored) returns. It is noticed, that 1/\\sqrt{N", "start_char_pos": 72, "end_char_pos": 119 } ]
[ 0 ]
1011.4840
1
It is shown that a chaotic (deterministic) order, rather than a stochastic randomness, controls the energy minima positions of the staking interactions in the DNA sequences . The Kaplan-Yorke map and the Arabidopsis genome have been considered as relevant simple examples. The chaotic order results in a large-scale chaotic coherence between the two complimentary DNA-duplex's sequences. A competition between this broad-band chaotic coherence and the resonance coherence produced by genetic code has been briefly discussed .
Different numerical mappings of the DNA sequences have been studied using a new cluster-scaling method and the well known spectral methods. It is shown , in particular, that the nucleotide sequences in DNA molecules have robust cluster-scaling properties. These properties are relevant to both types of nucleotide pair-bases interactions: hydrogen bonds and stacking interactions. It is shown that taking into account the cluster-scaling properties can help to improve heterogeneous models of the DNA dynamics. It is also shown that a chaotic (deterministic) order, rather than a stochastic randomness, controls the energy minima positions of the stacking interactions in the DNA sequences on large scales. The chaotic order results in a large-scale chaotic coherence between the two complimentary DNA-duplex's sequences. A competition between this broad-band chaotic coherence and the resonance coherence produced by genetic code has been briefly discussed . The Arabidopsis plant genome (which is a model plant for genome analysis) and two human genes: BRCA2 and NRXN1, have been considered as examples .
[ { "type": "A", "before": null, "after": "Different numerical mappings of the DNA sequences have been studied using a new cluster-scaling method and the well known spectral methods.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "that", "after": ", in particular, that the nucleotide sequences in DNA molecules have robust cluster-scaling properties. These properties are relevant to both types of nucleotide pair-bases interactions: hydrogen bonds and stacking interactions. It is shown that taking into account the cluster-scaling properties can help to improve heterogeneous models of the DNA dynamics. It is also shown that", "start_char_pos": 13, "end_char_pos": 17 }, { "type": "R", "before": "staking", "after": "stacking", "start_char_pos": 132, "end_char_pos": 139 }, { "type": "R", "before": ". The Kaplan-Yorke map and the Arabidopsis genome have been considered as relevant simple examples. The", "after": "on large scales. The", "start_char_pos": 174, "end_char_pos": 277 }, { "type": "A", "before": null, "after": ". The Arabidopsis plant genome (which is a model plant for genome analysis) and two human genes: BRCA2 and NRXN1, have been considered as examples", "start_char_pos": 525, "end_char_pos": 525 } ]
[ 0, 175, 273, 388 ]
1011.4976
1
There are many different mathematical setups to model the dynamics of gene regulatory networks (GRNs), most based on Boolean networks or chemical kinetics ideas. Here we present an approach to model GRNs through a class of irreversible interacting particle systems called type-dependent stochastic spin models that depart from these traditions. The approach allows for a straightforward modeling of biochemical pathways at the level of promotion--inhibition circuitry, and a particular but otherwise generally applicable choice for the microscopic transition rates of the models also makes them of independent interest. To illustrate the formalism, we investigate some stationary state properties of the repressilator, a synthetic three-gene network of transcriptional regulators that possesses a rich dynamical behavior .
We describe an approach to model biochemical reaction networks at the level of promotion-inhibition circuitry through a class of stochastic spin models that depart from the usual chemical kinetics setup and includes spatial and temporal density fluctuations in a most natural way. A particular but otherwise generally applicable choice for the microscopic transition rates of the models also makes them of independent interest. To illustrate the formalism, we investigate some stationary state properties of the repressilator, a synthetic three-gene network of transcriptional regulators that possesses a rich dynamical behaviour .
[ { "type": "R", "before": "There are many different mathematical setups to model the dynamics of gene regulatory networks (GRNs), most based on Boolean networks or chemical kinetics ideas. Here we present", "after": "We describe", "start_char_pos": 0, "end_char_pos": 177 }, { "type": "R", "before": "GRNs", "after": "biochemical reaction networks at the level of promotion-inhibition circuitry", "start_char_pos": 199, "end_char_pos": 203 }, { "type": "D", "before": "irreversible interacting particle systems called type-dependent", "after": null, "start_char_pos": 223, "end_char_pos": 286 }, { "type": "R", "before": "these traditions. The approach allows for a straightforward modeling of biochemical pathways at the level of promotion--inhibition circuitry, and a", "after": "the usual chemical kinetics setup and includes spatial and temporal density fluctuations in a most natural way. A", "start_char_pos": 327, "end_char_pos": 474 }, { "type": "R", "before": "behavior", "after": "behaviour", "start_char_pos": 812, "end_char_pos": 820 } ]
[ 0, 161, 344, 619 ]
1011.4976
2
We describe an approach to model biochemical reaction networks at the level of promotion-inhibition circuitry through a class of stochastic spin models that depart from the usual chemical kinetics setup and includes spatial and temporal density fluctuations in a most natural way. A particular but otherwise generally applicable choice for the microscopic transition rates of the models also makes them of independent interest. To illustrate the formalism, we investigate some stationary state properties of the repressilator, a synthetic three-gene network of transcriptional regulators that possesses a rich dynamical behaviour .
We describe an approach to model genetic regulatory networks at the level of promotion-inhibition circuitry through a class of stochastic spin models that includes spatial and temporal density fluctuations in a natural way. The formalism can be viewed as an agent-based model formalism with agent behavior ruled by a classical spin-like pseudo-Hamiltonian playing the role of a local, individual objective function. A particular but otherwise generally applicable choice for the microscopic transition rates of the models also makes them of independent interest. To illustrate the formalism, we investigate (by Monte Carlo simulations) some stationary state properties of the repressilator, a synthetic three-gene network of transcriptional regulators that possesses oscillatory behavior .
[ { "type": "R", "before": "biochemical reaction", "after": "genetic regulatory", "start_char_pos": 33, "end_char_pos": 53 }, { "type": "D", "before": "depart from the usual chemical kinetics setup and", "after": null, "start_char_pos": 157, "end_char_pos": 206 }, { "type": "D", "before": "most", "after": null, "start_char_pos": 263, "end_char_pos": 267 }, { "type": "A", "before": null, "after": "The formalism can be viewed as an agent-based model formalism with agent behavior ruled by a classical spin-like pseudo-Hamiltonian playing the role of a local, individual objective function.", "start_char_pos": 281, "end_char_pos": 281 }, { "type": "A", "before": null, "after": "(by Monte Carlo simulations)", "start_char_pos": 473, "end_char_pos": 473 }, { "type": "R", "before": "a rich dynamical behaviour", "after": "oscillatory behavior", "start_char_pos": 605, "end_char_pos": 631 } ]
[ 0, 280, 428 ]
1011.5317
1
We consider a multi-channel wireless network with user-level CSMA, which consists in running one instance of the CSMA algorithm per active user at each MAC interface . Specifically, each such instance attempts to access a randomly chosen radio channel after some random time and transmits a packet of the corresponding user if the channel is sensed idle . We prove that , unlike the standard CSMA algorithm, this simple distributed access scheme is optimal in the sense that the network is stable for all traffic intensities in the capacity region of the network. Simulations show that capacity (in terms of maximum sustainable traffic) increases by a factor ranging from 1.5 to 2.5 compared to the standard CSMAalgorithm, depending on the network topology and the number of available radio channels .
We analyze the performance of CSMA in multi-channel wireless networks, accounting for the random nature of traffic . Specifically, we assess the ability of CSMA to fully utilize the radio resources and in turn to stabilize the network in a dynamic setting with flow arrivals and departures . We prove that CSMA is optimal in ad-hoc mode but not in infrastructure mode, when all data flows originate from or are destined to some access points, due to the inherent bias of CSMA against downlink traffic. We propose a slight modification of CSMA, that we refer to as flow-aware CSMA, which corrects this bias and makes the algorithm optimal in all cases. The analysis is based on some time-scale separation assumption which is proved valid in the limit of large flow sizes .
[ { "type": "R", "before": "consider a", "after": "analyze the performance of CSMA in", "start_char_pos": 3, "end_char_pos": 13 }, { "type": "R", "before": "network with user-level CSMA, which consists in running one instance of the CSMA algorithm per active user at each MAC interface", "after": "networks, accounting for the random nature of traffic", "start_char_pos": 37, "end_char_pos": 165 }, { "type": "R", "before": "each such instance attempts to access a randomly chosen radio channel after some random time and transmits a packet of the corresponding user if the channel is sensed idle", "after": "we assess the ability of CSMA to fully utilize the radio resources and in turn to stabilize the network in a dynamic setting with flow arrivals and departures", "start_char_pos": 182, "end_char_pos": 353 }, { "type": "R", "before": ", unlike the standard CSMA algorithm, this simple distributed access scheme", "after": "CSMA", "start_char_pos": 370, "end_char_pos": 445 }, { "type": "R", "before": "the sense that the network is stable for all traffic intensities in the capacity region of the network. Simulations show that capacity (in terms of maximum sustainable traffic) increases by a factor ranging from 1.5 to 2.5 compared to the standard CSMAalgorithm, depending on the network topology and the number of available radio channels", "after": "ad-hoc mode but not in infrastructure mode, when all data flows originate from or are destined to some access points, due to the inherent bias of CSMA against downlink traffic. We propose a slight modification of CSMA, that we refer to as flow-aware CSMA, which corrects this bias and makes the algorithm optimal in all cases. The analysis is based on some time-scale separation assumption which is proved valid in the limit of large flow sizes", "start_char_pos": 460, "end_char_pos": 799 } ]
[ 0, 355, 563 ]
1011.5431
1
In this work we study the thermodynamic and dynamic characteristics of an enzymatic reaction at the single molecular level . We investigate how the stability of the enzyme-state stationary probability distribution, the reaction velocity, and its efficiency of energy conversion depend on the system parameters. We employ in this study a recently introduced formalism for performing a multiscale thermodynamic analysis in continuous-time discrete-state stochastic systems.
In this work we study , at the single molecular level, the thermodynamic and dynamic characteristics of an enzymatic reaction comprising a rate limiting step . We investigate how the stability of the enzyme-state stationary probability distribution, the reaction velocity, and its efficiency of energy conversion depend on the system parameters. We employ in this study a recently introduced formalism for performing a multiscale thermodynamic analysis in continuous-time discrete-state stochastic systems.
[ { "type": "R", "before": "the", "after": ", at the single molecular level, the", "start_char_pos": 22, "end_char_pos": 25 }, { "type": "R", "before": "at the single molecular level", "after": "comprising a rate limiting step", "start_char_pos": 93, "end_char_pos": 122 } ]
[ 0, 124, 310 ]
1011.5650
1
The aim of this paper is to show how option prices in the Jump-diffusion model can be computed using meshless methods based on Radial Basis Function (RBF) interpolation. The RBF technique is demonstrated by solving the partial integro-differential equation (PIDE) in one-dimension for the American put and the European vanilla call/put options on dividend-paying stocks in the Merton and Kou Jump-diffusion models. The radial basis function we select is the Cubic Spline. We also propose a simple numerical algorithm for finding a finite computational range of a global integral term in the PIDE so that the accuracy of approximation of the integral can be improved. Moreover, the solution functions of the PIDE are approximated explicitly by RBFs which have exact forms so we can easily compute the global intergal by any kind of numerical quadrature. Finally, we will also show numerically that our scheme is second order accurate in spatial variables in both American and European cases .
The aim of this chapter is to show how option prices in jump-diffusion models can be computed using meshless methods based on Radial Basis Function (RBF) interpolation. The RBF technique is demonstrated by solving the partial integro-differential equation (PIDE) in one-dimension for the American put and the European vanilla call/put options on dividend-paying stocks in the Merton and Kou jump-diffusion models. The radial basis function we select is the Cubic Spline. We also propose a simple numerical algorithm for finding a finite computational range of an improper integral term in the PIDE so that the accuracy of approximation of the integral can be improved. Moreover, the solution functions of the PIDE are approximated explicitly by RBFs which have exact forms so we can easily compute the global integral by any kind of numerical quadrature. Finally, we will not only show numerically that our scheme is second order accurate in both spatial and time variables in a European case but also second order accurate in spatial variables and first order accurate in time variables in an American case .
[ { "type": "R", "before": "paper", "after": "chapter", "start_char_pos": 16, "end_char_pos": 21 }, { "type": "R", "before": "the Jump-diffusion model", "after": "jump-diffusion models", "start_char_pos": 54, "end_char_pos": 78 }, { "type": "R", "before": "Jump-diffusion", "after": "jump-diffusion", "start_char_pos": 392, "end_char_pos": 406 }, { "type": "R", "before": "a global", "after": "an improper", "start_char_pos": 561, "end_char_pos": 569 }, { "type": "R", "before": "intergal", "after": "integral", "start_char_pos": 807, "end_char_pos": 815 }, { "type": "R", "before": "also", "after": "not only", "start_char_pos": 870, "end_char_pos": 874 }, { "type": "R", "before": "spatial variables in both American and European cases", "after": "both spatial and time variables in a European case but also second order accurate in spatial variables and first order accurate in time variables in an American case", "start_char_pos": 936, "end_char_pos": 989 } ]
[ 0, 169, 414, 471, 666, 852 ]
1011.5705
1
This chapter derives the properties of light from the properties of processing, including its ability to be both a wave and a particle, to detect objects it doesn't touch, to choose a route after it arrives, to take all paths to a destination and to spin in any direction both ways at once . In this model, a photon is a processing wave front instantiated from an entity program class, and quantum collapse is when one class stops to merge with another, which restarts its processing . This processing approach to physics also gives insights into entanglement, superposition, counterfactuals, the holographic principle and the measurement problem. Its conceptual cost is that physical reality becomes a processing output .
This chapter derives the properties of light from the properties of processing, including its ability to be both a wave and a particle, to respond to objects it doesn't physically touch, to take all paths to a destination, to choose a route after it arrives, and to spin both ways at once as it moves. Here a photon is an entity program spreading as a processing wave of instances. It becomes a "particle" if any part of it overloads the grid network that runs it, causing the photon program to reboot and restart at a new node. The "collapse of the wave function" is how quantum processing creates what we call a physical photon. This informational approach gives insights into issues like the law of least action, entanglement, superposition, counterfactuals, the holographic principle and the measurement problem. The conceptual cost is that physical reality is a quantum processing output, i.e. virtual .
[ { "type": "R", "before": "detect", "after": "respond to", "start_char_pos": 139, "end_char_pos": 145 }, { "type": "A", "before": null, "after": "physically", "start_char_pos": 165, "end_char_pos": 165 }, { "type": "A", "before": null, "after": "take all paths to a destination, to", "start_char_pos": 176, "end_char_pos": 176 }, { "type": "D", "before": "to take all paths to a destination", "after": null, "start_char_pos": 210, "end_char_pos": 244 }, { "type": "D", "before": "in any direction", "after": null, "start_char_pos": 257, "end_char_pos": 273 }, { "type": "R", "before": ". In this model,", "after": "as it moves. Here", "start_char_pos": 292, "end_char_pos": 308 }, { "type": "D", "before": "a processing wave front instantiated from", "after": null, "start_char_pos": 321, "end_char_pos": 362 }, { "type": "R", "before": "class, and quantum collapse is when one class stops to merge with another, which restarts its processing . This processing approach to physics also", "after": "spreading as a processing wave of instances. It becomes a \"particle\" if any part of it overloads the grid network that runs it, causing the photon program to reboot and restart at a new node. The \"collapse of the wave function\" is how quantum processing creates what we call a physical photon. This informational approach", "start_char_pos": 381, "end_char_pos": 528 }, { "type": "A", "before": null, "after": "issues like the law of least action,", "start_char_pos": 549, "end_char_pos": 549 }, { "type": "R", "before": "Its", "after": "The", "start_char_pos": 651, "end_char_pos": 654 }, { "type": "R", "before": "becomes a processing output", "after": "is a quantum processing output, i.e. virtual", "start_char_pos": 696, "end_char_pos": 723 } ]
[ 0, 293, 487, 650 ]
1012.0249
1
According to the Loss Distribution Approach, the operational risk of a bank is determined as 99.9\% quantile of the respective loss distribution, covering unexpected severe events. The 99.9\% quantile is a tail event. Supported by the Pickands-Balkema-de Haan Theorem, tail events exceeding some high threshold are usually modeled by a Generalized Pareto Distribution (GPD). However, because of the heavy-tailedness of this distribution, estimation of its tail quantiles is not a trivial task, which becomes even more difficult when there are outliers in the data, or data is pooled among several sources. In such situations where the origin and representativeness of the available data is not clear , robust methods provide a remedy which can provide reliable estimates when classical methods already fail. We illustrate this , applying such robust methods for parameter estimation of a GPD - including some recently developed methods - to data from Algorithmics Inc. To better understand these results, we provide some useful diagnostic plots adjusted for this context: influence plot, outlyingness plot , and QQ plot with robust confidence bands.
According to the Loss Distribution Approach, the operational risk of a bank is determined as 99.9\% quantile of the respective loss distribution, covering unexpected severe events. The 99.9\% quantile can be considered a tail event. As supported by the Pickands-Balkema-de Haan Theorem, tail events exceeding some high threshold are usually modeled by a Generalized Pareto Distribution (GPD). Estimation of GPD tail quantiles is not a trivial task, in particular if one takes into account the heavy tails of this distribution, the possibility of singular outliers, and, moreover, the fact that data is usually pooled among several sources. Moreover, if, as is frequently the case, operational losses are pooled anonymously, relevance of the fitting data for the respective bank is not self-evident. In such situations , robust methods may provide stable estimates when classical methods already fail. In this paper, optimally-robust procedures MBRE, OMSE, RMXE are introduced to the application domain of operational risk. We apply these procedures to parameter estimation of a GPD at data from Algorithmics Inc. To better understand these results, we provide supportive diagnostic plots adjusted for this context: influence plots, outlyingness plots , and QQ plots with robust confidence bands.
[ { "type": "R", "before": "is", "after": "can be considered", "start_char_pos": 201, "end_char_pos": 203 }, { "type": "R", "before": "Supported", "after": "As supported", "start_char_pos": 218, "end_char_pos": 227 }, { "type": "R", "before": "However, because of the heavy-tailedness of this distribution, estimation of its", "after": "Estimation of GPD", "start_char_pos": 375, "end_char_pos": 455 }, { "type": "R", "before": "which becomes even more difficult when there are outliers in the data, or data is", "after": "in particular if one takes into account the heavy tails of this distribution, the possibility of singular outliers, and, moreover, the fact that data is usually", "start_char_pos": 494, "end_char_pos": 575 }, { "type": "R", "before": "In such situations where the origin and representativeness of the available data is not clear", "after": "Moreover, if, as is frequently the case, operational losses are pooled anonymously, relevance of the fitting data for the respective bank is not self-evident. In such situations", "start_char_pos": 606, "end_char_pos": 699 }, { "type": "R", "before": "provide a remedy which can provide reliable", "after": "may provide stable", "start_char_pos": 717, "end_char_pos": 760 }, { "type": "R", "before": "We illustrate this , applying such robust methods for", "after": "In this paper, optimally-robust procedures MBRE, OMSE, RMXE are introduced to the application domain of operational risk. We apply these procedures to", "start_char_pos": 808, "end_char_pos": 861 }, { "type": "R", "before": "- including some recently developed methods - to", "after": "at", "start_char_pos": 892, "end_char_pos": 940 }, { "type": "R", "before": "some useful", "after": "supportive", "start_char_pos": 1016, "end_char_pos": 1027 }, { "type": "R", "before": "plot, outlyingness plot", "after": "plots, outlyingness plots", "start_char_pos": 1082, "end_char_pos": 1105 }, { "type": "R", "before": "plot", "after": "plots", "start_char_pos": 1115, "end_char_pos": 1119 } ]
[ 0, 180, 217, 374, 605, 807, 968 ]
1012.1412
1
The paper introduces options where the holder to select certain strategies that control the payoff. These control processes are assumed to be adapted to the current flow of information. These options have potential applications for commodities and energy trading. For instance, a control process can represent the quantity of some commodity that can be purchased by a certain given price at current time. In another example, the control represents the weight of the integral in a modification of the Asian option. It is shown that pricing for these options requires solution of an stochastic control problem. Some pricing rules are suggested.
The paper introduces a modification of the passport options such that the holder selects select dynamically a weight function that control the distribution of the payments (benefits) for option holder over time. The control processes are assumed to be adapted to the current flow of information. These options are oriented on applications for commodities and energy trading. For instance, a control process can represent the quantity of some commodity that can be purchased by a certain given price at current time. In another example, the control represents the weight of the integral in a modification of the Asian option. It is shown that pricing for these options requires solution of a stochastic control problem. Some pricing rules are suggested.
[ { "type": "R", "before": "options where the holder to select certain strategies", "after": "a modification of the passport options such that the holder selects select dynamically a weight function", "start_char_pos": 21, "end_char_pos": 74 }, { "type": "R", "before": "payoff. These", "after": "distribution of the payments (benefits) for option holder over time. The", "start_char_pos": 92, "end_char_pos": 105 }, { "type": "R", "before": "have potential", "after": "are oriented on", "start_char_pos": 200, "end_char_pos": 214 }, { "type": "R", "before": "an", "after": "a", "start_char_pos": 578, "end_char_pos": 580 } ]
[ 0, 99, 185, 263, 404, 513, 608 ]
1012.1412
2
The paper introduces a modification of the passport options such that the holder selects select dynamically a weight function that control the distribution of the payments (benefits) for option holder over time. The control processes are assumed to be adapted to the current flow of information. These options are oriented on applications for commodities and energy trading . For instance , a control process can represent the quantity of some commodity that can be purchased by a certain given price at current time. In another example, the control represents the weight of the integral in a modification of the Asian option. It is shown that pricing for these options requires solution of a stochastic control problem. Some pricing rules are suggested .
The paper introduces a limit version of multiple stopping options such that the holder selects dynamically a weight function that control the distribution of the payments (benefits) over time. In applications for commodities and energy trading , a control process can represent the quantity that can be purchased by a fixed price at current time. In another example, the control represents the weight of the integral in a modification of the Asian option. The pricing for these options requires to solve a stochastic control problem. Some existence results and pricing rules are obtained via modifications of parabolic Bellman equations .
[ { "type": "R", "before": "modification of the passport", "after": "limit version of multiple stopping", "start_char_pos": 23, "end_char_pos": 51 }, { "type": "D", "before": "select", "after": null, "start_char_pos": 89, "end_char_pos": 95 }, { "type": "D", "before": "for option holder", "after": null, "start_char_pos": 183, "end_char_pos": 200 }, { "type": "R", "before": "The control processes are assumed to be adapted to the current flow of information. These options are oriented on", "after": "In", "start_char_pos": 212, "end_char_pos": 325 }, { "type": "D", "before": ". For instance", "after": null, "start_char_pos": 374, "end_char_pos": 388 }, { "type": "D", "before": "of some commodity", "after": null, "start_char_pos": 436, "end_char_pos": 453 }, { "type": "R", "before": "certain given", "after": "fixed", "start_char_pos": 481, "end_char_pos": 494 }, { "type": "R", "before": "It is shown that", "after": "The", "start_char_pos": 627, "end_char_pos": 643 }, { "type": "R", "before": "solution of", "after": "to solve", "start_char_pos": 679, "end_char_pos": 690 }, { "type": "A", "before": null, "after": "existence results and", "start_char_pos": 726, "end_char_pos": 726 }, { "type": "R", "before": "suggested", "after": "obtained via modifications of parabolic Bellman equations", "start_char_pos": 745, "end_char_pos": 754 } ]
[ 0, 211, 295, 375, 517, 626, 720 ]
1012.1473
1
A network observed in a particular context may appear to have "unusual" properties. To quantify this, it is appropriate to randomize the network and test the hypothesis that the network is not statistically different from expected in a motivated ensemble. However, when dealing with metabolic networks, the straightforward randomization of the network generates fictitious reactions that are biochemically meaningless. Here we provide several natural ensembles for randomizing such metabolic networks. A first constraint is to use valid biochemical reactions. Further constraints correspond to imposing appropriate functional constraints. We explain how to perform these randomizations and show how they allow one to approach the properties of biological metabolic networks. An implication of the present work is that the observed global structural properties of real metabolic networks are likely to be the consequence of simple biochemical and functional constraints.
Networks coming from protein-protein interactions, transcriptional regulation, signaling, or metabolism may appear to have "unusual" properties. To quantify this, it is appropriate to randomize the network and test the hypothesis that the network is not statistically different from expected in a motivated ensemble. However, when dealing with metabolic networks, the randomization of the network using edge exchange generates fictitious reactions that are biochemically meaningless. Here we provide several natural ensembles of randomized metabolic networks. A first constraint is to use valid biochemical reactions. Further constraints correspond to imposing appropriate functional constraints. We explain how to perform these randomizations with the help of Markov Chain Monte Carlo (MCMC) and show that they allow one to approach the properties of biological metabolic networks. The implication of the present work is that the observed global structural properties of real metabolic networks are likely to be the consequence of simple biochemical and functional constraints.
[ { "type": "R", "before": "A network observed in a particular context", "after": "Networks coming from protein-protein interactions, transcriptional regulation, signaling, or metabolism", "start_char_pos": 0, "end_char_pos": 42 }, { "type": "D", "before": "straightforward", "after": null, "start_char_pos": 307, "end_char_pos": 322 }, { "type": "A", "before": null, "after": "using edge exchange", "start_char_pos": 352, "end_char_pos": 352 }, { "type": "R", "before": "for randomizing such", "after": "of randomized", "start_char_pos": 462, "end_char_pos": 482 }, { "type": "R", "before": "and show how", "after": "with the help of Markov Chain Monte Carlo (MCMC) and show that", "start_char_pos": 687, "end_char_pos": 699 }, { "type": "R", "before": "An", "after": "The", "start_char_pos": 776, "end_char_pos": 778 } ]
[ 0, 83, 255, 419, 502, 560, 639, 775 ]
1012.1793
1
In the "positive interest" models of Flesaker and Hughston , the nominal discount bond system is determined by the specification of a one-parameter family of positive martingales. In the present paper we extend this analysis to include a variety of distributions for the martingale family, parameterised by a function that determines the behaviour of the market risk premium. These distributions include jump and diffusion characteristics that generate various interesting properties for discount bond returns. For example, one can generate skewness and excess kurtosis in the bond returns by choosing the martingale family to be given by (a) exponential gamma processes, or (b) exponential variance gamma processes. The models are "rational" in the sense that the discount bond price is given by the ratio of a pair of weighted sums of positive martingales. Our findings lead to semi-analytical formulae for the prices of European options on discount bonds .
In the "positive interest" models of Flesaker-Hughston , the nominal discount bond system is determined by a one-parameter family of positive martingales. In the present paper we extend this analysis to include a variety of distributions for the martingale family, parameterised by a function that determines the behaviour of the market risk premium. These distributions include jump and diffusion characteristics that generate various properties for discount bond returns. For example, one can generate skewness and excess kurtosis in the bond returns by choosing the martingale family to be given by (a) exponential gamma processes, or (b) exponential variance gamma processes. The models are "rational" in the sense that the discount bond price is given by a ratio of weighted sums of positive martingales. Our findings lead to semi-analytical formulae for the prices of options on discount bonds . A number of general results concerning L\'evy interest rate models are presented as well .
[ { "type": "R", "before": "Flesaker and Hughston", "after": "Flesaker-Hughston", "start_char_pos": 37, "end_char_pos": 58 }, { "type": "D", "before": "the specification of", "after": null, "start_char_pos": 111, "end_char_pos": 131 }, { "type": "D", "before": "interesting", "after": null, "start_char_pos": 461, "end_char_pos": 472 }, { "type": "R", "before": "the ratio of a pair of", "after": "a ratio of", "start_char_pos": 797, "end_char_pos": 819 }, { "type": "D", "before": "European", "after": null, "start_char_pos": 923, "end_char_pos": 931 }, { "type": "A", "before": null, "after": ". A number of general results concerning L\\'evy interest rate models are presented as well", "start_char_pos": 958, "end_char_pos": 958 } ]
[ 0, 179, 375, 510, 716, 858 ]
1012.2504
1
To date , there is little structural data available on the AGAAAAGA palindrome in the hydrophobic region (113-120) of prion proteins , although many experimental studies have shown that this region has amyloid fibril forming properties . This region belongs to the N-terminal unstructured region (1-123) of prions, the structure of which has proved hard to determine using NMR or X-ray crystallography. Computational optimization approaches , however, allow us to obtain a description of prion 113-120 peptide at a microscopic level. Zhang (J. Mol. Model., 2010, DOI: 10.1007/s00894-010-0691-y) using the traditional local optimization search steepest descent and conjugate gradient methods hybridized with the standard global optimization search simulated annealingmethod successfully constructed three atomic-resolution structures of prion AGAAAAGA amyloid fibrils. Zhang pointed out, basing on the NNQNTF peptide of elk prion 173--178 (3FVA.pdb released on 30-JUN-2009 in the Protein Data Bank), new models for prion AGAAAAGA amyloid fibrils might also be able to be constructed. In this paper , using the hybrid local and global optimization search methods, the author successfully constructs another two optimal atomic-resolution amyloid fibril models for the prion AGAAAAGA palindrome. According to Brown (Mol. Cell. Neurosci., 2000, Vol. 15, pp. 66--78), AGAAAAGA is an inhibitor of infectious prions. These atomic-resolution structures of the AGAAAAGA amyloid fibrils constructed might be useful in furthering the goals of medicinal chemistry for controlling prion diseases.
X-ray crystallography is a powerful tool to determine the protein 3D structure. However, it is time-consuming and expensive, and not all proteins can be successfully crystallized, particularly for membrane proteins. Although nuclear magnetic resonance (NMR) spectroscopy is indeed a very powerful tool in determining the 3D structures of membrane proteins, it is also time-consuming and costly. To the best of the authors' knowledge , there is little structural data available on the AGAAAAGA palindrome in the hydrophobic region (113-120) of prion proteins due to the noncrystalline and insoluble nature of the amyloid fibril , although many experimental studies have shown that this region has amyloid fibril forming properties and plays an important role in prion diseases. In view of this, the present study is devoted to address this problem from computational approaches such as global energy optimization, simulated annealing, and structural bioinformatics. The optimal atomic-resolution structures of prion AGAAAAGA amyloid fibils reported in this paper have a value to the scientific community in its drive to find treatments for prion diseases.
[ { "type": "R", "before": "To date", "after": "X-ray crystallography is a powerful tool to determine the protein 3D structure. However, it is time-consuming and expensive, and not all proteins can be successfully crystallized, particularly for membrane proteins. Although nuclear magnetic resonance (NMR) spectroscopy is indeed a very powerful tool in determining the 3D structures of membrane proteins, it is also time-consuming and costly. To the best of the authors' knowledge", "start_char_pos": 0, "end_char_pos": 7 }, { "type": "A", "before": null, "after": "due to the noncrystalline and insoluble nature of the amyloid fibril", "start_char_pos": 133, "end_char_pos": 133 }, { "type": "R", "before": ". This region belongs to the N-terminal unstructured region (1-123) of prions, the structure of which has proved hard to determine using NMR or X-ray crystallography. Computational optimization approaches , however, allow us to obtain a description of prion 113-120 peptide at a microscopic level. Zhang (J. Mol. Model., 2010, DOI: 10.1007/s00894-010-0691-y) using the traditional local optimization search steepest descent and conjugate gradient methods hybridized with the standard global optimization search simulated annealingmethod successfully constructed three", "after": "and plays an important role in prion diseases. In view of this, the present study is devoted to address this problem from computational approaches such as global energy optimization, simulated annealing, and structural bioinformatics. The optimal", "start_char_pos": 237, "end_char_pos": 804 }, { "type": "R", "before": "fibrils. Zhang pointed out, basing on the NNQNTF peptide of elk prion 173--178 (3FVA.pdb released on 30-JUN-2009 in the Protein Data Bank), new models for prion AGAAAAGA amyloid fibrils might also be able to be constructed. In this paper , using the hybrid local and global optimization search methods, the author successfully constructs another two optimal atomic-resolution amyloid fibril models for the prion AGAAAAGA palindrome. According to Brown (Mol. Cell. Neurosci., 2000, Vol. 15, pp. 66--78), AGAAAAGA is an inhibitor of infectious prions. These atomic-resolution structures of the AGAAAAGA amyloid fibrils constructed might be useful in furthering the goals of medicinal chemistry for controlling", "after": "fibils reported in this paper have a value to the scientific community in its drive to find treatments for", "start_char_pos": 860, "end_char_pos": 1567 } ]
[ 0, 238, 403, 534, 868, 1083, 1292, 1323, 1409 ]
1012.3102
1
This paper consists of two parts. In the first part , by building on the work of Jouini and Kallal in 26%DIFDELCMD < ]%%% , Sch\"urger in 37%DIFDELCMD < ]%%% , Frittelli in 15%DIFDELCMD < ]%%% , Pham and Touzi in 34%DIFDELCMD < ] %%% and Napp in 33%DIFDELCMD < ]%%% , we prove the Fundamental Theorem of Asset Pricing under short sales prohibitions in continuous-time financial models where asset prices are driven by nonnegative locally bounded semimartingales. A key result in this generalization is an extension of a well known result of Ansel and Stricker in 1%DIFDELCMD < ]%%% . Additionally, and motivated by the works of F\"ollmer and Kramkov in 13%DIFDELCMD < ] %%% and Delbaen and Schachermayer in 9%DIFDELCMD < ]%%% , we study the hedging problem in these models and connect it to a properly defined property of "maximality" of contingent claims.
This paper consists of two parts. In the first part %DIFDELCMD < ]%%% %DIFDELCMD < ]%%% %DIFDELCMD < ]%%% %DIFDELCMD < ] %%% %DIFDELCMD < ]%%% we prove the Fundamental Theorem of Asset Pricing under short sales prohibitions in continuous-time financial models where asset prices are driven by nonnegative locally bounded semimartingales. A key step in this proof is an extension of a well known result of Ansel and Stricker %DIFDELCMD < ]%%% %DIFDELCMD < ] %%% %DIFDELCMD < ]%%% . In the second part we study the hedging problem in these models and connect it to a properly defined property of "maximality" of contingent claims.
[ { "type": "D", "before": ", by building on the work of Jouini and Kallal in", "after": null, "start_char_pos": 52, "end_char_pos": 101 }, { "type": "D", "before": "26", "after": null, "start_char_pos": 102, "end_char_pos": 104 }, { "type": "D", "before": ", Sch\\\"urger in", "after": null, "start_char_pos": 122, "end_char_pos": 137 }, { "type": "D", "before": "37", "after": null, "start_char_pos": 138, "end_char_pos": 140 }, { "type": "D", "before": ", Frittelli in", "after": null, "start_char_pos": 158, "end_char_pos": 172 }, { "type": "D", "before": "15", "after": null, "start_char_pos": 173, "end_char_pos": 175 }, { "type": "D", "before": ", Pham and Touzi in", "after": null, "start_char_pos": 193, "end_char_pos": 212 }, { "type": "D", "before": "34", "after": null, "start_char_pos": 213, "end_char_pos": 215 }, { "type": "D", "before": "and Napp in", "after": null, "start_char_pos": 234, "end_char_pos": 245 }, { "type": "D", "before": "33", "after": null, "start_char_pos": 246, "end_char_pos": 248 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 266, "end_char_pos": 267 }, { "type": "R", "before": "result in this generalization", "after": "step in this proof", "start_char_pos": 469, "end_char_pos": 498 }, { "type": "D", "before": "in", "after": null, "start_char_pos": 560, "end_char_pos": 562 }, { "type": "D", "before": "1", "after": null, "start_char_pos": 563, "end_char_pos": 564 }, { "type": "D", "before": ". Additionally, and motivated by the works of F\\\"ollmer and Kramkov in", "after": null, "start_char_pos": 582, "end_char_pos": 652 }, { "type": "D", "before": "13", "after": null, "start_char_pos": 653, "end_char_pos": 655 }, { "type": "D", "before": "and Delbaen and Schachermayer in", "after": null, "start_char_pos": 674, "end_char_pos": 706 }, { "type": "D", "before": "9", "after": null, "start_char_pos": 707, "end_char_pos": 708 }, { "type": "R", "before": ",", "after": ". In the second part", "start_char_pos": 726, "end_char_pos": 727 } ]
[ 0, 33, 462 ]
1012.3102
2
This paper consists of two parts. In the first part we prove the Fundamental Theorem of Asset Pricing under short sales prohibitions in continuous-time financial models where asset prices are driven by nonnegative locally bounded semimartingales. A key step in this proof is an extension of a well known result of Ansel and Stricker. In the second part we study the hedging problem in these models and connect it to a properly defined property of "maximality" of contingent claims.
This paper consists of two parts. In the first part we prove the fundamental theorem of asset pricing under short sales prohibitions in continuous-time financial models where asset prices are driven by nonnegative , locally bounded semimartingales. A key step in this proof is an extension of a well-known result of Ansel and Stricker. In the second part we study the hedging problem in these models and connect it to a properly defined property of "maximality" of contingent claims.
[ { "type": "R", "before": "Fundamental Theorem of Asset Pricing", "after": "fundamental theorem of asset pricing", "start_char_pos": 65, "end_char_pos": 101 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 214, "end_char_pos": 214 }, { "type": "R", "before": "well known", "after": "well-known", "start_char_pos": 294, "end_char_pos": 304 } ]
[ 0, 33, 247, 334 ]
1012.3234
1
This paper studies the valuation of a class of credit default swaps (CDSs) with the embedded option to switch to a different premium and notional principal anytime prior to a credit event. These are early exercisable contracts that give the protection buyer or seller the right to step-up, step-down, or cancel the CDS position. The pricing problem is formulated under a structural credit risk model based on Levy processes. This leads to the analytic and numerical studies of an optimal stopping problem subject to early termination due to default. In a general spectrally negative Levy model, we rigorously derive an analytic solution for the investor's optimal exercise strategy. This allows for instant computation of the credit spread under various specifications. Numerical examples are provided to examine the impacts of default risk and contractual features on the credit spread and exercise strategy.
This paper studies the valuation of a class of credit default swaps (CDSs) with the embedded option to switch to a different premium and notional principal anytime prior to a credit event. These are early exercisable contracts that give the protection buyer or seller the right to step-up, step-down, or cancel the CDS position. The pricing problem is formulated under a structural credit risk model based on Levy processes. This leads to the analytic and numerical studies of several optimal stopping problems subject to early termination due to default. In a general spectrally negative Levy model, we rigorously derive the optimal exercise strategy. This allows for instant computation of the credit spread under various specifications. Numerical examples are provided to examine the impacts of default risk and contractual features on the credit spread and exercise strategy.
[ { "type": "R", "before": "an optimal stopping problem", "after": "several optimal stopping problems", "start_char_pos": 477, "end_char_pos": 504 }, { "type": "R", "before": "an analytic solution for the investor's", "after": "the", "start_char_pos": 616, "end_char_pos": 655 } ]
[ 0, 188, 328, 424, 549, 682, 769 ]
1012.3234
2
This paper studies the valuation of a class of credit default swaps (CDSs) with the embedded option to switch to a different premium and notional principal anytime prior to a credit event. These are early exercisable contracts that give the protection buyer or seller the right to step-up, step-down, or cancel the CDS position. The pricing problem is formulated under a structural credit risk model based on Levy processes. This leads to the analytic and numerical studies of several optimal stopping problems subject to early termination due to default. In a general spectrally negative Levy model, we rigorously derive the optimal exercise strategy. This allows for instant computation of the credit spread under various specifications. Numerical examples are provided to examine the impacts of default risk and contractual features on the credit spread and exercise strategy.
This paper studies the valuation of a class of default swaps with the embedded option to switch to a different premium and notional principal anytime prior to a credit event. These are early exercisable contracts that give the protection buyer or seller the right to step-up, step-down, or cancel the swap position. The pricing problem is formulated under a structural credit risk model based on Levy processes. This leads to the analytic and numerical studies of several optimal stopping problems subject to early termination due to default. In a general spectrally negative Levy model, we rigorously derive the optimal exercise strategy. This allows for instant computation of the credit spread under various specifications. Numerical examples are provided to examine the impacts of default risk and contractual features on the credit spread and exercise strategy.
[ { "type": "R", "before": "credit default swaps (CDSs)", "after": "default swaps", "start_char_pos": 47, "end_char_pos": 74 }, { "type": "R", "before": "CDS", "after": "swap", "start_char_pos": 315, "end_char_pos": 318 } ]
[ 0, 188, 328, 424, 555, 652, 739 ]
1012.3287
1
Spin models of neural networks and genetic networks are considered elegant as they are accessible to statistical mechanics tools for spin glasses and magnetic systems. However, the conventional choice of variables in spin systems may cause problems in some models when parameter choices are unrealistic from the biological perspective. Obviously, this may limit the role of a model as a template model for biological systems. Perhaps less obviously, also ensembles of random networks are affected and may exhibit different critical properties. We here consider a prototypical network model that is biologically plausible in its local mechanisms. We study a discrete dynamical network with two characteristic properties: Nodes with binary states 0 and 1, and a modified threshold function with \Theta_0(0)=0. We explore the critical properties of random networks of such nodes and find a critical connectivity K_c=2.0 with activity vanishing at the critical point .
Spin models of neural networks and genetic networks are considered elegant as they are accessible to statistical mechanics tools for spin glasses and magnetic systems. However, the conventional choice of variables in spin systems may cause problems in some models when parameter choices are unrealistic from a biological perspective. Obviously, this may limit the role of a model as a template model for biological systems. Perhaps less obviously, also ensembles of random networks are affected and may exhibit different critical properties. We consider here a prototypical network model that is biologically plausible in its local mechanisms. We study a discrete dynamical network with two characteristic properties: Nodes with binary states 0 and 1, and a modified threshold function with \Theta_0(0)=0. We explore the critical properties of random networks of such nodes and find a critical connectivity K_c=2.0 with activity vanishing at the critical point . Finally, we observe that the present model allows a more natural implementation of recent models of budding yeast and fission yeast cell-cycle control networks .
[ { "type": "R", "before": "the", "after": "a", "start_char_pos": 308, "end_char_pos": 311 }, { "type": "R", "before": "here consider", "after": "consider here", "start_char_pos": 547, "end_char_pos": 560 }, { "type": "A", "before": null, "after": ". Finally, we observe that the present model allows a more natural implementation of recent models of budding yeast and fission yeast cell-cycle control networks", "start_char_pos": 963, "end_char_pos": 963 } ]
[ 0, 167, 335, 425, 543, 645, 807 ]
1012.3308
1
We study the role of quantum fluctuations of atomic nuclei in thermally activated, out-of-equilibrium processes . To this goal we introduce an extension of the Dominant Reaction Pathways (DRP) formalism, in which the quantum corrections to the classical Langevin dynamics are rigorously taken into account to order hbar^2 . We apply our method to study the C 7_eq -> C 7_ax transition of alanine dipeptide. We find that quantum fluctuations significantly modify the reaction mechanism with respect to a classical calculation . For example, the energy difference which is overcome along the most probable pathway is reduced by as much as 50\%.
We study the role of quantum fluctuations of atomic nuclei in the real-time dynamics of non-equilibrium macro-molecular transitions . To this goal we introduce an extension of the Dominant Reaction Pathways (DRP) formalism, in which the quantum corrections to the classical overdamped Langevin dynamics are rigorously taken into account to second order in h-bar. We first illustrate our approach with a simple toy model and then we apply it to study the C7_{eq transition of alanine dipeptide. We find that the inclusion of quantum fluctuations can significantly modify the reaction mechanism . For example, the energy difference which is overcome along the most probable pathway is reduced by as much as 50\%.
[ { "type": "R", "before": "thermally activated, out-of-equilibrium processes", "after": "the real-time dynamics of non-equilibrium macro-molecular transitions", "start_char_pos": 62, "end_char_pos": 111 }, { "type": "A", "before": null, "after": "overdamped", "start_char_pos": 254, "end_char_pos": 254 }, { "type": "R", "before": "order hbar^2 . We apply our method", "after": "second order in h-bar. We first illustrate our approach with a simple toy model and then we apply it", "start_char_pos": 310, "end_char_pos": 344 }, { "type": "R", "before": "C 7_eq -> C 7_ax", "after": "C7_{eq", "start_char_pos": 358, "end_char_pos": 374 }, { "type": "R", "before": "quantum fluctuations", "after": "the inclusion of quantum fluctuations can", "start_char_pos": 421, "end_char_pos": 441 }, { "type": "D", "before": "with respect to a classical calculation", "after": null, "start_char_pos": 486, "end_char_pos": 525 } ]
[ 0, 113, 324, 407, 527 ]
1012.3308
2
We study the role of quantum fluctuations of atomic nuclei in the real-time dynamics of non-equilibrium macro-molecular transitions. To this goal we introduce an extension of the Dominant Reaction Pathways (DRP) formalism, in which the quantum corrections to the classical overdamped Langevin dynamics are rigorously taken into account to second order in h-bar . We first illustrate our approach with a simple toy model and then we apply it to study the C7 _{eq to C7 _{ax transition of alanine dipeptide. We find that the inclusion of quantum fluctuations can significantly modify the reaction mechanism . For example, the energy difference which is overcome along the most probable pathway is reduced by as much as 50\%.
We study the role of quantum fluctuations of atomic nuclei in the real-time dynamics of non-equilibrium macro-molecular transitions. To this goal we introduce an extension of the Dominant Reaction Pathways (DRP) formalism, in which the quantum corrections to the classical overdamped Langevin dynamics are rigorously taken into account to order h^2 . We first illustrate our approach in simple cases, and compare with the results of the instanton theory. Then we apply our method to study the C7 _eq to C7 _ax transition of alanine dipeptide. We find that the inclusion of quantum fluctuations can significantly modify the reaction mechanism for peptides . For example, the energy difference which is overcome along the most probable pathway is reduced by as much as 50\%.
[ { "type": "R", "before": "second order in h-bar", "after": "order h^2", "start_char_pos": 339, "end_char_pos": 360 }, { "type": "R", "before": "with a simple toy model and then we apply it", "after": "in simple cases, and compare with the results of the instanton theory. Then we apply our method", "start_char_pos": 396, "end_char_pos": 440 }, { "type": "R", "before": "_{eq", "after": "_eq", "start_char_pos": 457, "end_char_pos": 461 }, { "type": "R", "before": "_{ax", "after": "_ax", "start_char_pos": 468, "end_char_pos": 472 }, { "type": "A", "before": null, "after": "for peptides", "start_char_pos": 605, "end_char_pos": 605 } ]
[ 0, 132, 362, 505, 607 ]
1012.5306
1
Evolutionary algorithms are parallel computing algorithms and simulated annealing algorithm is a sequential computing algorithm. This paper inserts simulated annealing into evolutionary computations and successful developed a hybrid Self-Adaptive Evolutionary Strategy \mu+\lambda method and a hybrid Self-Adaptive Classical Evolutionary Programming method. Numerical results on more than 40 benchmark test problems of global optimization show that the hybrid methods presented in this paper are very effective. Lennard-Jones potential energy minimization is another benchmark for testing new global optimization algorithms. It is studied through the amyloid fibril constructions by this paper. To date , there is little molecular structural data available on the AGAAAAGA palindrome in the hydrophobic region ( 113-120 ) of prion proteins .This region belongs to the N-terminal unstructured region ( 1-123 ) of prion proteins , the structure of which has proved hard to determine using NMR spectroscopy or X-ray crystallography due to the insoluble and noncrystalline nature of the amyloid fibril . However, computer computational optimization can easily obtain a description of this peptide at a submicroscopic level. The hybrid methods presented in this paper will be applied to construct amyloid fibril molecular structures of the prion 113-120 region .
X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy are two powerful tools to determine the protein 3D structure. However, not all proteins can be successfully crystallized, particularly for membrane proteins. Although NMR spectroscopy is indeed very powerful in determining the 3D structures of membrane proteins, same as X-ray crystallography, it is still very time-consuming and expensive. Under many circumstances, due to the noncrystalline and insoluble nature of some proteins, X-ray and NMR cannot be used at all. Computational approaches, however, allow us to obtain a description of the protein 3D structure at a submicroscopic level. To the best of the author's knowledge , there is little structural data available to date on the AGAAAAGA palindrome in the hydrophobic region ( 113--120 ) of prion proteins , which falls just within the N-terminal unstructured region ( 1--123 ) of prion proteins . Many experimental studies have shown that the AGAAAAGA region has amyloid fibril forming properties and plays an important role in prion diseases. However, due to the noncrystalline and insoluble nature of the amyloid fibril , little structural data on the AGAAAAGA is available. This paper introduces a simple optimization strategy approach to address the 3D atomic-resolution structure of prion AGAAAAGA amyloid fibrils. Atomic-resolution structures of prion AGAAAAGA amyloid fibrils got in this paper are useful for the drive to find treatments for prion diseases in the field of medicinal chemistry .
[ { "type": "R", "before": "Evolutionary algorithms are parallel computing algorithms and simulated annealing algorithm is a sequential computing algorithm. This paper inserts simulated annealing into evolutionary computations and successful developed a hybrid Self-Adaptive Evolutionary Strategy \\mu+\\lambda method and a hybrid Self-Adaptive Classical Evolutionary Programming method. Numerical results on more than 40 benchmark test problems of global optimization show that the hybrid methods presented in this paper are very effective. Lennard-Jones potential energy minimization is another benchmark for testing new global optimization algorithms. It is studied through the amyloid fibril constructions by this paper. To date", "after": "X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy are two powerful tools to determine the protein 3D structure. However, not all proteins can be successfully crystallized, particularly for membrane proteins. Although NMR spectroscopy is indeed very powerful in determining the 3D structures of membrane proteins, same as X-ray crystallography, it is still very time-consuming and expensive. Under many circumstances, due to the noncrystalline and insoluble nature of some proteins, X-ray and NMR cannot be used at all. Computational approaches, however, allow us to obtain a description of the protein 3D structure at a submicroscopic level. To the best of the author's knowledge", "start_char_pos": 0, "end_char_pos": 702 }, { "type": "D", "before": "molecular", "after": null, "start_char_pos": 721, "end_char_pos": 730 }, { "type": "A", "before": null, "after": "to date", "start_char_pos": 757, "end_char_pos": 757 }, { "type": "R", "before": "113-120", "after": "113--120", "start_char_pos": 813, "end_char_pos": 820 }, { "type": "R", "before": ".This region belongs to", "after": ", which falls just within", "start_char_pos": 841, "end_char_pos": 864 }, { "type": "R", "before": "1-123", "after": "1--123", "start_char_pos": 902, "end_char_pos": 907 }, { "type": "R", "before": ", the structure of which has proved hard to determine using NMR spectroscopy or X-ray crystallography", "after": ". Many experimental studies have shown that the AGAAAAGA region has amyloid fibril forming properties and plays an important role in prion diseases. However,", "start_char_pos": 928, "end_char_pos": 1029 }, { "type": "R", "before": "insoluble and noncrystalline", "after": "noncrystalline and insoluble", "start_char_pos": 1041, "end_char_pos": 1069 }, { "type": "R", "before": ". However, computer computational optimization can easily obtain a description of this peptide at a submicroscopic level. The hybrid methods presented in this paper will be applied to construct amyloid fibril molecular structures of the prion 113-120 region", "after": ", little structural data on the AGAAAAGA is available. This paper introduces a simple optimization strategy approach to address the 3D atomic-resolution structure of prion AGAAAAGA amyloid fibrils. Atomic-resolution structures of prion AGAAAAGA amyloid fibrils got in this paper are useful for the drive to find treatments for prion diseases in the field of medicinal chemistry", "start_char_pos": 1099, "end_char_pos": 1356 } ]
[ 0, 128, 357, 511, 624, 694, 842, 1100, 1220 ]
1012.5327
1
We present a novel modulation level classification (MLC) method based on probability distribution distance functions. The proposed method uses modified Kuiper and Kolmogorov- Smirnov (KS) distances to achieve low computational complexity and outperforms the state of the art methods based on cumulants and goodness-of-fit (GoF) tests. We derive the theoretical performance of the proposed MLC method and verify it via simulations. The best classification accuracy under AWGN with SNR mismatch and phase jitter is achieved with the proposed MLC method using Kuiper distances.
We present a novel modulation level classification (MLC) method based on probability distribution distance functions. The proposed method uses modified Kuiper and Kolmogorov-Smirnov distances to achieve low computational complexity and outperforms the state of the art methods based on cumulants and goodness-of-fit tests. We derive the theoretical performance of the proposed MLC method and verify it via simulations. The best classification accuracy , under AWGN with SNR mismatch and phase jitter , is achieved with the proposed MLC method using Kuiper distances.
[ { "type": "R", "before": "Kolmogorov- Smirnov (KS)", "after": "Kolmogorov-Smirnov", "start_char_pos": 163, "end_char_pos": 187 }, { "type": "D", "before": "(GoF)", "after": null, "start_char_pos": 322, "end_char_pos": 327 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 464, "end_char_pos": 464 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 511, "end_char_pos": 511 } ]
[ 0, 117, 334, 430 ]
1101.0073
1
Molecular biology explains function of molecules by their geometric and electronic structures which are mainly determined by utilization of quantum effects in chemistry. However, further quantum effects are not thought to play any significant role in the essential processes of life. On the contrary, consideration of quantum circuits/protocols organic molecules as software and hardware of living systems that are co-optimized during evolution, may be useful to pass over the difficulties raised by biochemical complexity and to understand the physics of life. In this sense, we model DNA replication with a reliable qubit representation of the nucleotides: 1) molecular recognition of a nucleotide is assumed to trigger an intrabase entanglement corresponding to a superposition of different tautomer forms and 2) pairing of complementary nucleotides is described by swapping intrabase entanglements with interbase entanglements. We examine possible realizations of quantum circuits/protocols to be used to obtain intrabase and interbase entanglements. Lastly, we discuss feasibility of the computational and experimental verification of the model .
Molecular biology explains function of molecules by their geometrical and electronical structures that are mainly determined by utilization of quantum effects in chemistry. However, further quantum effects are not thought to play any significant role in the essential processes of life. On the contrary, consideration of quantum circuits/protocols organic molecules as software and hardware of living systems that are co-optimized during evolution, may be useful to overcome the difficulties raised by biochemical complexity and to understand the physics of life. In this sense, we review quantum information-theoretic approaches to the process of DNA replication and propose a new model in which 1) molecular recognition of a nucleobase is assumed to trigger an intrabase entanglement corresponding to a superposition of different tautomer forms and 2) pairing of complementary nucleobases is described by swapping intrabase entanglements with interbase entanglements. We examine possible biochemical realizations of quantum circuits/protocols to be used to obtain intrabase and interbase entanglements. We deal with the problem of cellular decoherence by using the theory of decoherence-free subspaces and subsystems. Lastly, we discuss feasibility of the computational or experimental verification of the model and future research directions .
[ { "type": "R", "before": "geometric and electronic structures which", "after": "geometrical and electronical structures that", "start_char_pos": 58, "end_char_pos": 99 }, { "type": "R", "before": "pass over", "after": "overcome", "start_char_pos": 463, "end_char_pos": 472 }, { "type": "R", "before": "model DNA replication with a reliable qubit representation of the nucleotides:", "after": "review quantum information-theoretic approaches to the process of DNA replication and propose a new model in which", "start_char_pos": 580, "end_char_pos": 658 }, { "type": "R", "before": "nucleotide", "after": "nucleobase", "start_char_pos": 689, "end_char_pos": 699 }, { "type": "R", "before": "nucleotides", "after": "nucleobases", "start_char_pos": 841, "end_char_pos": 852 }, { "type": "A", "before": null, "after": "biochemical", "start_char_pos": 952, "end_char_pos": 952 }, { "type": "A", "before": null, "after": "We deal with the problem of cellular decoherence by using the theory of decoherence-free subspaces and subsystems.", "start_char_pos": 1056, "end_char_pos": 1056 }, { "type": "R", "before": "and", "after": "or", "start_char_pos": 1109, "end_char_pos": 1112 }, { "type": "A", "before": null, "after": "and future research directions", "start_char_pos": 1152, "end_char_pos": 1152 } ]
[ 0, 169, 283, 561, 931, 1055 ]
1101.0761
1
This paper provides a proof of the Global Attractor Conjecture in the setting where the underlying reaction diagram consists of a single linkage class . The method of partitioning a set of vectors along a sequence is introduced and acts as one of the main analytical tools .
This paper is concerned with the dynamical properties of deterministically modeled chemical reaction systems. Specifically, this paper provides a proof of the Global Attractor Conjecture in the setting where the underlying reaction diagram consists of a single linkage class , or connected component. The conjecture dates back to the early 1970s and is the most well known and important open problem in the field of chemical reaction network theory. The resolution of the conjecture has important biological and mathematical implications in both the deterministic and stochastic settings. The method of partitioning a set of vectors along a sequence is introduced and acts as one of the main analytical tools throughout .
[ { "type": "A", "before": null, "after": "is concerned with the dynamical properties of deterministically modeled chemical reaction systems. Specifically, this paper", "start_char_pos": 11, "end_char_pos": 11 }, { "type": "R", "before": ". The", "after": ", or connected component. The conjecture dates back to the early 1970s and is the most well known and important open problem in the field of chemical reaction network theory. The resolution of the conjecture has important biological and mathematical implications in both the deterministic and stochastic settings. The", "start_char_pos": 152, "end_char_pos": 157 }, { "type": "A", "before": null, "after": "throughout", "start_char_pos": 274, "end_char_pos": 274 } ]
[ 0, 153 ]
1101.0761
2
This paper is concerned with the dynamical properties of deterministically modeled chemical reaction systems. Specifically, this paper provides a proof of the Global Attractor Conjecture in the setting where the underlying reaction diagram consists of a single linkage class, or connected component. The conjecture dates back to the early 1970s and is the most well known and important open problem in the field of chemical reaction network theory. The resolution of the conjecture has important biological and mathematical implications in both the deterministic and stochastic settings. The method of partitioning a set of vectors along a sequence is introduced and acts as one of the main analytical tools throughout .
This paper is concerned with the dynamical properties of deterministically modeled chemical reaction systems. Specifically, this paper provides a proof of the Global Attractor Conjecture in the setting where the underlying reaction diagram consists of a single linkage class, or connected component. The conjecture dates back to the early 1970s and is the most well known and important open problem in the field of chemical reaction network theory. The resolution of the conjecture has important biological and mathematical implications in both the deterministic and stochastic settings. One of our main analytical tools, which is introduced here, will be a method for partitioning the relevant monomials of the dynamical system along sequences of trajectory points into classes with comparable growths. This will allow us to conclude that if a trajectory converges to the boundary, then a whole family of Lyapunov functions decrease along the trajectory, a fact that will allow us to overcome the fact that the usual Lyapunov function used in this setting does not diverge to infinity as trajectories converge to the boundary, which has been the technical sticking point to a proof of the Global Attractor Conjecture over the years .
[ { "type": "R", "before": "The method of partitioning a set of vectors along a sequence is introduced and acts as one of the main analytical tools throughout", "after": "One of our main analytical tools, which is introduced here, will be a method for partitioning the relevant monomials of the dynamical system along sequences of trajectory points into classes with comparable growths. This will allow us to conclude that if a trajectory converges to the boundary, then a whole family of Lyapunov functions decrease along the trajectory, a fact that will allow us to overcome the fact that the usual Lyapunov function used in this setting does not diverge to infinity as trajectories converge to the boundary, which has been the technical sticking point to a proof of the Global Attractor Conjecture over the years", "start_char_pos": 588, "end_char_pos": 718 } ]
[ 0, 109, 299, 448, 587 ]
1101.0761
3
This paper is concerned with the dynamical properties of deterministically modeled chemical reaction systems. Specifically, this paper provides a proof of the Global Attractor Conjecture in the setting where the underlying reaction diagram consists of a single linkage class, or connected component. The conjecture dates back to the early 1970s and is the most well known and important open problem in the field of chemical reaction network theory. The resolution of the conjecture has important biological and mathematical implications in both the deterministic and stochastic settings. One of our main analytical tools, which is introduced here, will be a method for partitioning the relevant monomials of the dynamical system along sequences of trajectory points into classes with comparable growths. This will allow us to conclude that if a trajectory converges to the boundary, then a whole family of Lyapunov functions decrease along the trajectory , a fact that will allow us to overcome the fact that the usual Lyapunov function used in this setting does not diverge to infinity as trajectories converge to the boundary , which has been the technical sticking point to a proof of the Global Attractor Conjecture over the years .
This paper is concerned with the dynamical properties of deterministically modeled chemical reaction systems. Specifically, this paper provides a proof of the Global Attractor Conjecture in the setting where the underlying reaction diagram consists of a single linkage class, or connected component. The conjecture dates back to the early 1970s and is the most well known and important open problem in the field of chemical reaction network theory. The resolution of the conjecture has important biological and mathematical implications in both the deterministic and stochastic settings. One of our main analytical tools, which is introduced here, will be a method for partitioning the relevant monomials of the dynamical system along sequences of trajectory points into classes with comparable growths. We will use this method to conclude that if a trajectory converges to the boundary, then a whole family of Lyapunov functions decrease along the trajectory . This will allow us to overcome the fact that the usual Lyapunov functions of chemical reaction network theory are bounded on the boundary of the positive orthant , which has been the technical sticking point to a proof of the Global Attractor Conjecture in the past .
[ { "type": "R", "before": "This will allow us", "after": "We will use this method", "start_char_pos": 804, "end_char_pos": 822 }, { "type": "R", "before": ", a fact that", "after": ". This", "start_char_pos": 955, "end_char_pos": 968 }, { "type": "R", "before": "function used in this setting does not diverge to infinity as trajectories converge to the boundary", "after": "functions of chemical reaction network theory are bounded on the boundary of the positive orthant", "start_char_pos": 1028, "end_char_pos": 1127 }, { "type": "R", "before": "over the years", "after": "in the past", "start_char_pos": 1220, "end_char_pos": 1234 } ]
[ 0, 109, 299, 448, 587, 803 ]
1101.0945
1
Portfolio turnpikes state that, as the investment horizon increases, optimal portfolios for generic utilities converge to those of isoelastic utilities. This paper proves three kinds of turnpikes. The abstract turnpike, valid in a general semimartingale setting, states that final payoffs and portfolios converge under their myopic probabilities. Diffusion models with several assets and a single state variable lead to the classic turnpike , in which optimal portfolios converge under the physical probability , and to the explicit turnpike , which identifies the limit of optimal portfolios in terms of the solution of an ergodic HJB equation.
Portfolio turnpikes state that, as the investment horizon increases, optimal portfolios for generic utilities converge to those of isoelastic utilities. This paper proves three kinds of turnpikes. In a general semimartingale setting, the abstract turnpike states that optimal final payoffs and portfolios converge under their myopic probabilities. In diffusion models with several assets and a single state variable , the classic turnpike demonstrates that optimal portfolios converge under the physical probability ; meanwhile the explicit turnpike identifies the limit of finite-horizon optimal portfolios as a long-run myopic portfolio defined in terms of the solution of an ergodic HJB equation.
[ { "type": "R", "before": "The abstract turnpike, valid in", "after": "In", "start_char_pos": 197, "end_char_pos": 228 }, { "type": "R", "before": "states that", "after": "the abstract turnpike states that optimal", "start_char_pos": 263, "end_char_pos": 274 }, { "type": "R", "before": "Diffusion", "after": "In diffusion", "start_char_pos": 347, "end_char_pos": 356 }, { "type": "R", "before": "lead to", "after": ",", "start_char_pos": 412, "end_char_pos": 419 }, { "type": "R", "before": ", in which", "after": "demonstrates that", "start_char_pos": 441, "end_char_pos": 451 }, { "type": "R", "before": ", and to", "after": "; meanwhile", "start_char_pos": 511, "end_char_pos": 519 }, { "type": "D", "before": ", which", "after": null, "start_char_pos": 542, "end_char_pos": 549 }, { "type": "R", "before": "optimal portfolios", "after": "finite-horizon optimal portfolios as a long-run myopic portfolio defined", "start_char_pos": 574, "end_char_pos": 592 } ]
[ 0, 152, 196, 346 ]
1101.1058
1
Actomyosin contractility is essential for biological force generation, and is well understood in highly ordered structures such as striated muscle. In vitro experiments have shown that non-sarcomeric bundles comprised only of F-actin and myosin thick filaments can also display contractile behavior , which cannot be described by standard muscle models. Here we investigate the microscopic symmetriesunderlying this process in large non-sarcomeric bundles with long actin filaments. We prove that contractile behavior requires non-identical motors that generate large enough forces to probe the nonlinear elastic behavior of F-actin. A simple disordered bundle model demonstrates a contraction mechanism based on these assumptions and predicts realistic bundle deformations. Recent experimental observations of F-actin buckling in in vitro contractile bundlessupport our model .
Actomyosin contractility is essential for biological force generation, and is well understood in organized structures such as striated muscle. Additionally, actomyosin bundles devoid of organization are known to contract both in vivo and in vitro , which cannot be described by standard muscle models. To narrow down the search for possible contraction mechanisms in these systems, we investigate their microscopic symmetries. We show that contractile behavior requires non-identical motors that generate large enough forces to probe the nonlinear elastic behavior of F-actin. This suggests a role for filament buckling in the contraction of these bundles, consistent with recent experimental results on reconstituted actomyosin bundles .
[ { "type": "R", "before": "highly ordered", "after": "organized", "start_char_pos": 97, "end_char_pos": 111 }, { "type": "R", "before": "In vitro experiments have shown that non-sarcomeric bundles comprised only of F-actin and myosin thick filaments can also display contractile behavior", "after": "Additionally, actomyosin bundles devoid of organization are known to contract both in vivo and in vitro", "start_char_pos": 148, "end_char_pos": 298 }, { "type": "R", "before": "Here we investigate the microscopic symmetriesunderlying this process in large non-sarcomeric bundles with long actin filaments. We prove", "after": "To narrow down the search for possible contraction mechanisms in these systems, we investigate their microscopic symmetries. We show", "start_char_pos": 354, "end_char_pos": 491 }, { "type": "R", "before": "A simple disordered bundle model demonstrates a contraction mechanism based on these assumptions and predicts realistic bundle deformations. Recent experimental observations of F-actin buckling in in vitro contractile bundlessupport our model", "after": "This suggests a role for filament buckling in the contraction of these bundles, consistent with recent experimental results on reconstituted actomyosin bundles", "start_char_pos": 634, "end_char_pos": 876 } ]
[ 0, 147, 353, 482, 633, 774 ]
1101.1180
1
We develop a model to describe the force generated by an array of well- separated parallel biofilaments, such as actin filaments . The filaments are assumed to only be coupled through mechanical contact with a movable barrier. We calculate the filament density distribution and the force-velocity relation with a mean-field approach combined with simulations. We identify two regimes: a non-condensed regime at low force in which filaments are spread out spatially, and a condensed regime at high force in which filaments accumulate near the barrier. We confirm that in this model, the stall force is equal to N times the stall force of a single filament. However, surprisingly, for large N, we find that the velocity approaches zero at forces significantly lower than the stall force.
We develop a model to describe the force generated by the polymerization of an array of parallel biofilaments . The filaments are assumed to be coupled only through mechanical contact with a movable barrier. We calculate the filament density distribution and the force-velocity relation with a mean-field approach combined with simulations. We identify two regimes: a non-condensed regime at low force in which filaments are spread out spatially, and a condensed regime at high force in which filaments accumulate near the barrier. We confirm a result previously known from other related studies, namely that the stall force is equal to N times the stall force of a single filament. In the model studied here, the approach to stalling is very slow, and the velocity is practically zero at forces significantly lower than the stall force.
[ { "type": "A", "before": null, "after": "the polymerization of", "start_char_pos": 54, "end_char_pos": 54 }, { "type": "R", "before": "well- separated parallel biofilaments, such as actin filaments", "after": "parallel biofilaments", "start_char_pos": 67, "end_char_pos": 129 }, { "type": "R", "before": "only be coupled", "after": "be coupled only", "start_char_pos": 161, "end_char_pos": 176 }, { "type": "R", "before": "that in this model,", "after": "a result previously known from other related studies, namely that", "start_char_pos": 563, "end_char_pos": 582 }, { "type": "R", "before": "However, surprisingly, for large N, we find that the velocity approaches", "after": "In the model studied here, the approach to stalling is very slow, and the velocity is practically", "start_char_pos": 657, "end_char_pos": 729 } ]
[ 0, 131, 227, 360, 551, 656 ]
1101.1273
1
Genetic interactions have been widely used to define functional relationships between proteins and pathways. In this study, we demonstrate that yeast synthetic lethal genetic interactions can be explained by the genetic interactions between domains of those proteins. The domain genetic interactions rarely overlap with the domain physical interactions from iPfam database and provide a complementary view about domain relationships. Moreover, we find that domains in multidomain yeast proteins contribute to their genetic interactions differently. The domain genetic interactions help more precisely define the function related to the synthetic lethal genetic interactions, and then help understand how domains contribute to different functionalities of multidomain proteins. Using the probabilities of domain genetic interactions, we are able to achieve high precision and low false positive rate in predicting genome-wide yeast synthetic lethal genetic interactions. Furthermore, we have also identified novel compensatory pathways from the predicted synthetic lethal genetic interactions. Our study significantly improves the understanding of yeast mulitdomain proteins, the synthetic lethal genetic interactions and the global functional relationships between proteins and pathways.
Genetic interactions have been widely used to define functional relationships between proteins and pathways. In this study, we demonstrated that yeast synthetic lethal genetic interactions can be explained by the genetic interactions between domains of those proteins. The domain genetic interactions rarely overlap with the domain physical interactions from iPfam database and provide a complementary view about domain relationships. Moreover, we found that domains in multidomain yeast proteins contribute to their genetic interactions differently. The domain genetic interactions help more precisely define the function related to the synthetic lethal genetic interactions, and then help understand how domains contribute to different functionalities of multidomain proteins. Using the probabilities of domain genetic interactions, we were able to predict novel yeast synthetic lethal genetic interactions. Furthermore, we had also identified novel compensatory pathways from the predicted synthetic lethal genetic interactions. Our study significantly improved the understanding of yeast mulitdomain proteins, the synthetic lethal genetic interactions and the functional relationships between proteins and pathways.
[ { "type": "R", "before": "demonstrate", "after": "demonstrated", "start_char_pos": 127, "end_char_pos": 138 }, { "type": "R", "before": "find", "after": "found", "start_char_pos": 447, "end_char_pos": 451 }, { "type": "R", "before": "are able to achieve high precision and low false positive rate in predicting genome-wide", "after": "were able to predict novel", "start_char_pos": 836, "end_char_pos": 924 }, { "type": "R", "before": "have", "after": "had", "start_char_pos": 986, "end_char_pos": 990 }, { "type": "R", "before": "improves", "after": "improved", "start_char_pos": 1117, "end_char_pos": 1125 }, { "type": "D", "before": "global", "after": null, "start_char_pos": 1225, "end_char_pos": 1231 } ]
[ 0, 108, 267, 433, 548, 776, 969, 1092 ]
1101.1707
1
Countries differ in the diversification of their exports and products differ in the number of countries that export them -their ubiquity.We document a new stylized fact in the global pattern of exports: a relationship between the diversification of a country 's exports and the ubiquity of its products. We argue that this is not implied by current theories of international trade and show that it is not a consequence of the heterogeneity in the level of diversification of countries or of the heterogeneity in the ubiquityof products. We account for this by constructing a model that assumes that each product requires a large number of non-tradable inputs, or capabilities, and that a country can only make the products for which it has all the requisite capabilities. Products differ in the number and specific nature of the capabilities they require, as countries differ in the number/nature of the capabilities they have. Products that require more capabilities are accessible to fewer countries (are less ubiquitous), while countries that have more capabilities have what is required to make more products (are more diversified). Our model implies that the return to the accumulation of new capabilities increases exponentially with the number of capabilities that a country has and that this convexity increases when either the total number of capabilities that exist in the world increases or the average complexity of products (the number of capabilities they require) increases. This defines a "quiescence trap" in which countries with few capabilities have negligible returns to the accumulation of new capabilities , while countrieswith many capabilities experience large returns to the accumulation of additional capabilities . We calibrate the model to three different sets and show that it reproduces the empirically observed distributions. We conclude that in our world the quiescence trap is strong .
Much of the analysis of economic growth has focused on the study of aggregate output. Here, we deviate from this tradition and look instead at the structure of output embodied in the network connecting countries to the products that they export.We characterize this network using four structural features: the negative relationship between the diversification of a country and the average ubiquity of its exports, and the non-normal distributions for product ubiquity, country diversification and product co-export. We model the structure of the network by assuming that products require a large number of non-tradable inputs, or capabilities, and that countries differ in the completeness of the set of capabilities they have. We solve the model assuming that the probability that a country has a capability and that a product requires a capability are constant and calibrate it to the data to find that it accounts well for all of the network features except for the heterogeneity in the distribution of country diversification. In the light of the model, this is evidence of a large heterogeneity in the distribution of capabilities across countries. Finally, we show that the model implies that the increase in diversification that is expected from the accumulation of a small number of capabilities is small for countries that have a few of them and large for those with many. This implies that the forces that help drive divergence in product diversity increase with the complexity of the global economy when capabilities travel poorly .
[ { "type": "R", "before": "Countries differ in the diversification of their exports and products differ in the number of countries that export them -their ubiquity.We document a new stylized fact in the global pattern of exports: a", "after": "Much of the analysis of economic growth has focused on the study of aggregate output. Here, we deviate from this tradition and look instead at the structure of output embodied in the network connecting countries to the products that they export.We characterize this network using four structural features: the negative", "start_char_pos": 0, "end_char_pos": 204 }, { "type": "R", "before": "'s exports and the", "after": "and the average", "start_char_pos": 259, "end_char_pos": 277 }, { "type": "R", "before": "products. We argue that this is not implied by current theories of international trade and show that it is not a consequence of the heterogeneity in the level of diversification of countries or of the heterogeneity in the ubiquityof products. We account for this by constructing a model that assumes that each product requires", "after": "exports, and the non-normal distributions for product ubiquity, country diversification and product co-export. We model the structure of the network by assuming that products require", "start_char_pos": 294, "end_char_pos": 620 }, { "type": "D", "before": "a country can only make the products for which it has all the requisite capabilities. Products differ in the number and specific nature of the capabilities they require, as", "after": null, "start_char_pos": 686, "end_char_pos": 858 }, { "type": "R", "before": "number/nature of the", "after": "completeness of the set of", "start_char_pos": 883, "end_char_pos": 903 }, { "type": "R", "before": "Products that require more capabilities are accessible to fewer countries (are less ubiquitous), while countries that have more capabilities have what is required to make more products (are more diversified). Our model implies that the return to the accumulation of new capabilities increases exponentially with the number of capabilities", "after": "We solve the model assuming that the probability", "start_char_pos": 928, "end_char_pos": 1266 }, { "type": "R", "before": "and that this convexity increases when either the total number of capabilities that exist in the world increases or the average complexity of products (the number of capabilities they require) increases. This defines a \"quiescence trap\" in which countries with few capabilities have negligible returns to the accumulation of new capabilities , while countrieswith many capabilities experience large returns to the accumulation of additional capabilities . We calibrate the model to three different sets and show that it reproduces the empirically observed distributions. We conclude that in our world the quiescence trap is strong", "after": "a capability and that a product requires a capability are constant and calibrate it to the data to find that it accounts well for all of the network features except for the heterogeneity in the distribution of country diversification. In the light of the model, this is evidence of a large heterogeneity in the distribution of capabilities across countries. Finally, we show that the model implies that the increase in diversification that is expected from the accumulation of a small number of capabilities is small for countries that have a few of them and large for those with many. 
This implies that the forces that help drive divergence in product diversity increase with the complexity of the global economy when capabilities travel poorly", "start_char_pos": 1286, "end_char_pos": 1916 } ]
[ 0, 137, 303, 536, 771, 927, 1136, 1489, 1741, 1856 ]
1101.1893
1
Random Boolean networks (RBNs) have been a popular model of genetic regulatory networks for more than four decades. However, most RBN studies have been made with regular topologies, while real regulatory networks have been found to be modular. In this work, we extend classical RBNs to define modular RBNs. Statistical experiments and analytical results show that modularity has a strong effect on the properties of RBNs. In particular, modular RBNs are closer to criticality than regular RBNs.
Random Boolean networks (RBNs) have been a popular model of genetic regulatory networks for more than four decades. However, most RBN studies have been made with random topologies, while real regulatory networks have been found to be modular. In this work, we extend classical RBNs to define modular RBNs. Statistical experiments and analytical results show that modularity has a strong effect on the properties of RBNs. In particular, modular RBNs have more attractors and are closer to criticality when chaotic dynamics would be expected, compared to classical RBNs.
[ { "type": "R", "before": "regular", "after": "random", "start_char_pos": 162, "end_char_pos": 169 }, { "type": "A", "before": null, "after": "have more attractors and", "start_char_pos": 450, "end_char_pos": 450 }, { "type": "R", "before": "than regular", "after": "when chaotic dynamics would be expected, compared to classical", "start_char_pos": 477, "end_char_pos": 489 } ]
[ 0, 115, 243, 421 ]
1101.3493
1
Due to the vast space of possible networks and the relatively small amount of data available, inferring genetic networks from gene expression data is one of the most challenging work in the post-genomic era . In this field, Gaussian Graphical Model (GGM) provides a convenient framework for the discovery of biological networks. In this paper, we propose an original approach for inferring gene regulation network using a robust biological prior on structure in order to limit the set of candidate networks. Pathways, that represent biological knowledge on the regulatory networks, will be used as an informative prior knowledge to drive Network Inference. This approach is based on the selection of a relevant set of genes, called the "molecular signature", associated with a condition of interest (for instance, the genes involved in disease development). In this context, differential expression analysis is a well established strategy. However outcome signatures are often not consistent and show little overlap between studies. Thus, we will dedicate the first part of our work to the improvement of the standard process of biomarker identification to guarantee the robustness and reproducibility of the molecular signature. Our approach enables to compare the network inferred between two conditions of interest (for instance case and control networks) and help along the biological interpretation of results. Thus it allows to identify differential regulations that occur in these conditions. We illustrate the proposed approach by applying our method to Network Inference in breast cancer's response to treatment study .
Inferring genetic networks from gene expression data is one of the most challenging work in the post-genomic era , partly due to the vast space of possible networks and the relatively small amount of data available . In this field, Gaussian Graphical Model (GGM) provides a convenient framework for the discovery of biological networks. In this paper, we propose an original approach for inferring gene regulation networks using a robust biological prior on their structure in order to limit the set of candidate networks. Pathways, that represent biological knowledge on the regulatory networks, will be used as an informative prior knowledge to drive Network Inference. This approach is based on the selection of a relevant set of genes, called the "molecular signature", associated with a condition of interest (for instance, the genes involved in disease development). In this context, differential expression analysis is a well established strategy. However outcome signatures are often not consistent and show little overlap between studies. Thus, we will dedicate the first part of our work to the improvement of the standard process of biomarker identification to guarantee the robustness and reproducibility of the molecular signature. Our approach enables to compare the networks inferred between two conditions of interest (for instance case and control networks) and help along the biological interpretation of results. Thus it allows to identify differential regulations that occur in these conditions. We illustrate the proposed approach by applying our method to a study of breast cancer's response to treatment .
[ { "type": "R", "before": "Due to the vast space of possible networks and the relatively small amount of data available, inferring", "after": "Inferring", "start_char_pos": 0, "end_char_pos": 103 }, { "type": "A", "before": null, "after": ", partly due to the vast space of possible networks and the relatively small amount of data available", "start_char_pos": 207, "end_char_pos": 207 }, { "type": "R", "before": "network", "after": "networks", "start_char_pos": 407, "end_char_pos": 414 }, { "type": "A", "before": null, "after": "their", "start_char_pos": 450, "end_char_pos": 450 }, { "type": "R", "before": "network", "after": "networks", "start_char_pos": 1268, "end_char_pos": 1275 }, { "type": "R", "before": "Network Inference in", "after": "a study of", "start_char_pos": 1564, "end_char_pos": 1584 }, { "type": "D", "before": "study", "after": null, "start_char_pos": 1623, "end_char_pos": 1628 } ]
[ 0, 209, 329, 509, 658, 859, 941, 1034, 1231, 1417, 1501 ]
1101.3572
1
We pursue an inverse approach to utility theory and consumption & investment problems. Instead of specifying an agent's utility function and deriving her actions, we assume we observe her actions (i.e. her consumption and investment strategies) and ask if it is possible to derive a utility function for which the observed behaviour is optimal. We work in continuous time both in a deterministic and stochastic setting. In a deterministic setup, we find that there are infinitely many utility functions generating a given consumption pattern. In the stochastic setting of the Black-Scholes complete market it turns out that the consumption and investment strategies have to satisfy a consistency condition (PDE) if they come from a classical utility maximisation problem. We further show that agent's important characteristics such as attitude towards risk (e.g. DARA) can be directly deduced from her consumption/investment choices.
We pursue an inverse approach to utility theory and consumption & investment problems. Instead of specifying an agent's utility function and deriving her actions, we assume we observe her actions (i.e. her consumption and investment strategies) and ask if it is possible to derive a utility function for which the observed behaviour is optimal. We work in continuous time both in a deterministic and stochastic setting. In a deterministic setup, we find that there are infinitely many utility functions generating a given consumption pattern. In the stochastic setting of the Black-Scholes complete market it turns out that the consumption and investment strategies have to satisfy a consistency condition (PDE) if they are to come from a classical utility maximisation problem. We show further that important characteristics of the agent such as her attitude towards risk (e.g. DARA) can be deduced directly from her consumption/investment choices.
[ { "type": "A", "before": null, "after": "are to", "start_char_pos": 720, "end_char_pos": 720 }, { "type": "R", "before": "further show that agent's important characteristics such as", "after": "show further that important characteristics of the agent such as her", "start_char_pos": 776, "end_char_pos": 835 }, { "type": "R", "before": "directly deduced", "after": "deduced directly", "start_char_pos": 877, "end_char_pos": 893 } ]
[ 0, 86, 344, 419, 542, 772 ]
1101.3617
1
We propose a very simple (almost linear) stochastic map model of economic dynamics. In the last decade, an array of observations in economics has been investigated in the econophysics literature, a major example being the universal features of inequality in terms of income and wealth. Another area of inquiry is the formation of opinion in a society. Our proposed model can generate the Gamma-like and the power law distributions as has been observed in the real data of income and wealth distributions . Also, it is able to produce a non-trivial phase transition in the opinion of a society (opinion formation). A number of physical models also generates similar results. In particular, the kinetic exchange models have been especially successful in this regard. Therefore, we compare the results obtained from these two approaches and point out a number of new features obtained from the map model.
We propose a stochastic map model of economic dynamics. In the last decade, an array of observations in economics has been investigated in the econophysics literature, a major example being the universal features of inequality in terms of income and wealth. Another area of inquiry is the formation of opinion in a society. The proposed model attempts to produce positively skewed distributions and the power law distributions as has been observed in the real data of income and wealth . Also, it shows a non-trivial phase transition in the opinion of a society (opinion formation). A number of physical models also generates similar results. In particular, the kinetic exchange models have been especially successful in this regard. Therefore, we compare the results obtained from these two approaches and discuss a number of new features and drawbacks of this model.
[ { "type": "D", "before": "very simple (almost linear)", "after": null, "start_char_pos": 13, "end_char_pos": 40 }, { "type": "R", "before": "Our proposed model can generate the Gamma-like", "after": "The proposed model attempts to produce positively skewed distributions", "start_char_pos": 352, "end_char_pos": 398 }, { "type": "D", "before": "distributions", "after": null, "start_char_pos": 490, "end_char_pos": 503 }, { "type": "R", "before": "is able to produce", "after": "shows", "start_char_pos": 515, "end_char_pos": 533 }, { "type": "R", "before": "point out", "after": "discuss", "start_char_pos": 838, "end_char_pos": 847 }, { "type": "R", "before": "obtained from the map", "after": "and drawbacks of this", "start_char_pos": 873, "end_char_pos": 894 } ]
[ 0, 83, 285, 351, 505, 613, 673, 764 ]
1101.4211
1
This paper focuses on designing and analyzing throughput-optimal scheduling policies that avoid the use of per-flow information, maintain one single data queue for each link, exploit only local information, and potentially improve the delay performance, for multi-hop wireless networks under general interference constraints. Although the celebrated back-pressure algorithm maximizes throughput, it requires per-flow information (which may be difficult to obtain and maintain), maintains per-flow (or per-destination) queues at each node, relies on constant exchange of queue length information among neighboring nodes to calculate link weights, and may result in poor delay performance . In contrast, the proposed schemes can circumvent these drawbacks while guaranteeing throughput optimality. We rigorously analyze the throughput performance of the proposed schemes and show that they are throughput-optimal using fluid limit techniques via an inductive argument . We also conduct simulations to show that the proposed schemes can substantially improve the delay performance.
This paper focuses on designing throughput-optimal scheduling policies that avoid using per-flow or per-destination information, maintain a single data queue for each link, exploit only local information, for multi-hop wireless networks under general interference constraints. Although the celebrated back-pressure algorithm maximizes throughput, it requires per-flow or per-destination information (which may be difficult to obtain and maintain), maintains complex data structure at each node, relies on constant exchange of queue length information among neighboring nodes , and results in poor delay performance in certain scenarios . In contrast, the proposed schemes can circumvent these drawbacks while guaranteeing throughput optimality. We rigorously analyze the performance of the proposed schemes using fluid limit techniques via an inductive argument and show that they are throughput-optimal . We also conduct simulations to show that the proposed schemes can substantially improve the delay performance.
[ { "type": "D", "before": "and analyzing", "after": null, "start_char_pos": 32, "end_char_pos": 45 }, { "type": "R", "before": "the use of", "after": "using", "start_char_pos": 96, "end_char_pos": 106 }, { "type": "A", "before": null, "after": "or per-destination", "start_char_pos": 116, "end_char_pos": 116 }, { "type": "R", "before": "one", "after": "a", "start_char_pos": 139, "end_char_pos": 142 }, { "type": "D", "before": "and potentially improve the delay performance,", "after": null, "start_char_pos": 208, "end_char_pos": 254 }, { "type": "A", "before": null, "after": "or per-destination", "start_char_pos": 418, "end_char_pos": 418 }, { "type": "R", "before": "per-flow (or per-destination) queues", "after": "complex data structure", "start_char_pos": 490, "end_char_pos": 526 }, { "type": "R", "before": "to calculate link weights, and may result", "after": ", and results", "start_char_pos": 621, "end_char_pos": 662 }, { "type": "A", "before": null, "after": "in certain scenarios", "start_char_pos": 689, "end_char_pos": 689 }, { "type": "D", "before": "throughput", "after": null, "start_char_pos": 825, "end_char_pos": 835 }, { "type": "D", "before": "and show that they are throughput-optimal", "after": null, "start_char_pos": 872, "end_char_pos": 913 }, { "type": "A", "before": null, "after": "and show that they are throughput-optimal", "start_char_pos": 969, "end_char_pos": 969 } ]
[ 0, 326, 691, 798, 971 ]
1101.4211
2
This paperfocuses on designing throughput-optimal scheduling policies that avoid using per-flow or per-destination information, maintain a single data queue for each link, exploit only local information, for multi-hop wireless networks under general interference constraints . Although the celebrated back-pressure algorithm maximizes throughput, it requires per-flow or per-destination information (which may be difficult to obtain and maintain ), maintains complex data structure at each node, relies on constant exchange of queue length information among neighboring nodes, and results in poor delay performance in certain scenarios. In contrast, the proposed schemes can circumvent these drawbacks while guaranteeing throughput optimality. We rigorously analyze the performance of the proposed schemes using fluid limit techniques via an inductive argument and show that they are throughput-optimal. We also conduct simulations to show that the proposed schemes can substantially improve the delay performance .
In this paper, we consider the problem of link scheduling in multi-hop wireless networks under general interference constraints. Our goal is to design scheduling schemes that do not use per-flow or per-destination information, maintain a single data queue for each link, and exploit only local information, while guaranteeing throughput optimality . Although the celebrated back-pressure algorithm maximizes throughput, it requires per-flow or per-destination information . It is usually difficult to obtain and maintain this type of information, especially in large networks, where there are numerous flows. Also, the back-pressure algorithm maintains a complex data structure at each node, keeps exchanging queue length information among neighboring nodes, and commonly results in poor delay performance . In this paper, we propose scheduling schemes that can circumvent these drawbacks and guarantee throughput optimality. These schemes use either the readily available hop-count information or only the local information for each link. We rigorously analyze the performance of the proposed schemes using fluid limit techniques via an inductive argument and show that they are throughput-optimal. We also conduct simulations to validate our theoretical results in various settings, and show that the proposed schemes can substantially improve the delay performance in most scenarios .
[ { "type": "R", "before": "This paperfocuses on designing throughput-optimal scheduling policies that avoid using", "after": "In this paper, we consider the problem of link scheduling in multi-hop wireless networks under general interference constraints. Our goal is to design scheduling schemes that do not use", "start_char_pos": 0, "end_char_pos": 86 }, { "type": "A", "before": null, "after": "and", "start_char_pos": 172, "end_char_pos": 172 }, { "type": "R", "before": "for multi-hop wireless networks under general interference constraints", "after": "while guaranteeing throughput optimality", "start_char_pos": 205, "end_char_pos": 275 }, { "type": "R", "before": "(which may be", "after": ". It is usually", "start_char_pos": 400, "end_char_pos": 413 }, { "type": "R", "before": "), maintains", "after": "this type of information, especially in large networks, where there are numerous flows. Also, the back-pressure algorithm maintains a", "start_char_pos": 447, "end_char_pos": 459 }, { "type": "R", "before": "relies on constant exchange of", "after": "keeps exchanging", "start_char_pos": 497, "end_char_pos": 527 }, { "type": "A", "before": null, "after": "commonly", "start_char_pos": 582, "end_char_pos": 582 }, { "type": "R", "before": "in certain scenarios. In contrast, the proposed schemes", "after": ". In this paper, we propose scheduling schemes that", "start_char_pos": 617, "end_char_pos": 672 }, { "type": "R", "before": "while guaranteeing", "after": "and guarantee", "start_char_pos": 704, "end_char_pos": 722 }, { "type": "A", "before": null, "after": "These schemes use either the readily available hop-count information or only the local information for each link.", "start_char_pos": 746, "end_char_pos": 746 }, { "type": "A", "before": null, "after": "validate our theoretical results in various settings, and", "start_char_pos": 938, "end_char_pos": 938 }, { "type": "A", "before": null, "after": "in most scenarios", "start_char_pos": 1018, "end_char_pos": 1018 } ]
[ 0, 277, 638, 745, 906 ]
1101.4523
1
In multi-class communication networks, traffic surges due to one class of users can significantly degrade the performance for other classes. During these transient periods, it is thus of crucial importance to implement priority mechanisms that conserve the quality of service experienced by the affected classes, while ensuring that the temporarily unstable class is not entirely neglected. In this paper, we examine -- for a suitably-scaled set of parameters -- the complex interaction occurring between several classes of traffic when an unstable class is penalized proportionally to its level of congestion . We characterize the evolution of the performance measures of the network from the moment the initial surge takes place until the system reaches its equilibrium. Using a time-space-transition-scaling , we show that the trajectories of the temporarily unstable class can be described by a differential equation, while those of the stable classes retain their stochastic nature. In particular, we show that the temporarily unstable class evolves at a time-scale which is much slower than that of the stable classes. Although the time-scales decouple, the dynamics of the temporarily unstable and the stable classes continue to influence one another. We further proceed to characterize the obtained differential equations for several simple network examples. In particular, the macroscopic asymptotic behavior of the unstable class allows us to gain important qualitative insights on how the bandwidth allocation affects performance . We illustrate these result on several toy examples and we finally build a penalization rule using these results for a network integrating streaming and elastic traffic.
In multi-class communication networks, traffic surges due to one class of users can significantly degrade the performance for other classes. During these transient periods, it is thus of crucial importance to implement priority mechanisms that conserve the quality of service experienced by the affected classes, while ensuring that the temporarily unstable class is not entirely neglected. In this paper, we examine the complex interaction occurring between several classes of traffic when classes obtain bandwidth proportionally to their incoming traffic . We characterize the evolution of the network from the moment the initial surge takes place until the system reaches its equilibrium. Using an appropriate scaling , we show that the trajectories of the temporarily unstable class can be described by a differential equation, while those of the stable classes retain their stochastic nature. A stochastic averaging phenomenon occurs and the dynamics of the temporarily unstable and the stable classes continue to influence one another. We further proceed to characterize the obtained differential equations and the stability region under this scaling for monotone networks . We illustrate these result on several toy examples and we finally build a penalization rule using these results for a network integrating streaming and elastic traffic.
[ { "type": "D", "before": "-- for a suitably-scaled set of parameters --", "after": null, "start_char_pos": 417, "end_char_pos": 462 }, { "type": "R", "before": "an unstable class is penalized proportionally to its level of congestion", "after": "classes obtain bandwidth proportionally to their incoming traffic", "start_char_pos": 537, "end_char_pos": 609 }, { "type": "D", "before": "performance measures of the", "after": null, "start_char_pos": 649, "end_char_pos": 676 }, { "type": "R", "before": "a time-space-transition-scaling", "after": "an appropriate scaling", "start_char_pos": 779, "end_char_pos": 810 }, { "type": "R", "before": "In particular, we show that the temporarily unstable class evolves at a time-scale which is much slower than that of the stable classes. Although the time-scales decouple, the", "after": "A stochastic averaging phenomenon occurs and the", "start_char_pos": 988, "end_char_pos": 1163 }, { "type": "R", "before": "for several simple network examples. In particular, the macroscopic asymptotic behavior of the unstable class allows us to gain important qualitative insights on how the bandwidth allocation affects performance", "after": "and the stability region under this scaling for monotone networks", "start_char_pos": 1330, "end_char_pos": 1540 } ]
[ 0, 140, 390, 611, 772, 987, 1124, 1258, 1366, 1542 ]
1101.4529
1
Understanding design principles of biomolecular recognition is a key question of molecular biology. Yet the enormous complexity and diversity of biological molecules hamper the efforts to gain a predictive ability for the free energy of protein-protein, protein-DNA, and protein-RNA binding. Here we predict that for a large class of biomolecular interactions it is possible to accurately estimate the relative free energy of binding based on the fluctuation properties of their energy spectra, even if a finite number of the energy levels is known. We show that the free energy of the system possessing a wider binding energy spectrum is almost surely lower compared to the system possessing a narrower energy spectrum. Our predictions imply that low-affinity binding scores, usually wasted in protein-protein and protein-DNA docking algorithms, can be efficiently utilized to compute the free energy .
Understanding design principles of biomolecular recognition is a key question of molecular biology. Yet the enormous complexity and diversity of biological molecules hamper the efforts to gain a predictive ability for the free energy of protein-protein, protein-DNA, and protein-RNA binding. Here , using a variant of the Derrida model, we predict that for a large class of biomolecular interactions , it is possible to accurately estimate the relative free energy of binding based on the fluctuation properties of their energy spectra, even if a finite number of the energy levels is known. We show that the free energy of the system possessing a wider binding energy spectrum is almost surely lower compared with the system possessing a narrower energy spectrum. Our predictions imply that low-affinity binding scores, usually wasted in protein-protein and protein-DNA docking algorithms, can be efficiently utilized to compute the free energy . Using the results of Rosetta docking simulations of protein-protein interactions from Andre et al., Proc. Natl. Acad. Sci. U.S.A. 105, 16148 (2008), we demonstrate the power of our predictions .
[ { "type": "A", "before": null, "after": ", using a variant of the Derrida model,", "start_char_pos": 297, "end_char_pos": 297 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 361, "end_char_pos": 361 }, { "type": "R", "before": "to", "after": "with", "start_char_pos": 670, "end_char_pos": 672 }, { "type": "A", "before": null, "after": ". Using the results of Rosetta docking simulations of protein-protein interactions from Andre et al., Proc. Natl. Acad. Sci. U.S.A. 105, 16148 (2008), we demonstrate the power of our predictions", "start_char_pos": 904, "end_char_pos": 904 } ]
[ 0, 99, 291, 551, 722 ]
1101.4548
1
It is argued that the simple trading strategy of leveraging or deleveraging an investment in the market portfolio cannot outperform the market. Such stochastic market efficiency places strong constraints on the possible stochastic properties of the market.Historical data confirm the hypothesis .
Peters (2011a) defined an optimal leverage which maximizes the time-average growth rate of an investment held at constant leverage. We test the hypothesis that this optimal leverage is attracted to 1, such that leveraging an investment in the market portfolio cannot yield long-run outperformance. Historical data support the hypothesis. This places a strong constraint on the stochastic properties of traded assets, which we call "leverage efficiency." Market conditions that deviate from leverage efficiency are unstable and may create leverage-driven bubbles. This result resolves the equity premium puzzle, informs interest rate setting, and constitutes a theory of noise in financial markets .
[ { "type": "R", "before": "It is argued that the simple trading strategy of leveraging or deleveraging", "after": "Peters (2011a) defined an optimal leverage which maximizes the time-average growth rate of an investment held at constant leverage. We test the hypothesis that this optimal leverage is attracted to 1, such that leveraging", "start_char_pos": 0, "end_char_pos": 75 }, { "type": "R", "before": "outperform the market. Such stochastic market efficiency places strong constraints on the possible", "after": "yield long-run outperformance. Historical data support the hypothesis. This places a strong constraint on the", "start_char_pos": 121, "end_char_pos": 219 }, { "type": "R", "before": "the market.Historical data confirm the hypothesis", "after": "traded assets, which we call \"leverage efficiency.\" Market conditions that deviate from leverage efficiency are unstable and may create leverage-driven bubbles. This result resolves the equity premium puzzle, informs interest rate setting, and constitutes a theory of noise in financial markets", "start_char_pos": 245, "end_char_pos": 294 } ]
[ 0, 143, 256 ]
1101.4548
2
Peters (2011a) defined an optimal leverage which maximizes the time-average growth rate of an investment held at constant leverage. We test the hypothesis that this optimal leverage is attracted to 1, such that leveraging an investment in the market portfolio cannot yield long-run outperformance. Historical data support the hypothesis. This places a strong constraint on the stochastic properties of traded assets, which we call "leverage efficiency." Market conditions that deviate from leverage efficiency are unstable and may create leverage-driven bubbles. This result resolves the equity premium puzzle , informs interest rate setting , and constitutes a theory of noise in financial markets .
Peters (2011a) defined an optimal leverage which maximizes the time-average growth rate of an investment held at constant leverage. It was hypothesized that this optimal leverage is attracted to 1, such that , e.g., leveraging an investment in the market portfolio cannot yield long-term outperformance. This places a strong constraint on the stochastic properties of prices of traded assets, which we call "leverage efficiency." Market conditions that deviate from leverage efficiency are unstable and may create leverage-driven bubbles. Here we expand on the hypothesis and its implications. These include a theory of noise that explains how systemic stability rules out smooth price changes at any pricing frequency; a resolution of the so-called equity premium puzzle ; a protocol for central bank interest rate setting to avoid leverage-driven price instabilities; and a method for detecting fraudulent investment schemes by exploiting differences between the stochastic properties of their prices and those of legitimately-traded assets. To submit the hypothesis to a rigorous test we choose price data from different assets: the S P500 index, Bitcoin, Berkshire Hathaway Inc., and Bernard L. Madoff Investment Securities LLC. Analysis of these data supports the hypothesis .
[ { "type": "R", "before": "We test the hypothesis", "after": "It was hypothesized", "start_char_pos": 132, "end_char_pos": 154 }, { "type": "A", "before": null, "after": ", e.g.,", "start_char_pos": 211, "end_char_pos": 211 }, { "type": "R", "before": "long-run outperformance. Historical data support the hypothesis.", "after": "long-term outperformance.", "start_char_pos": 274, "end_char_pos": 338 }, { "type": "A", "before": null, "after": "prices of", "start_char_pos": 403, "end_char_pos": 403 }, { "type": "R", "before": "This result resolves the", "after": "Here we expand on the hypothesis and its implications. These include a theory of noise that explains how systemic stability rules out smooth price changes at any pricing frequency; a resolution of the so-called", "start_char_pos": 565, "end_char_pos": 589 }, { "type": "R", "before": ", informs", "after": "; a protocol for central bank", "start_char_pos": 612, "end_char_pos": 621 }, { "type": "R", "before": ", and constitutes a theory of noise in financial markets", "after": "to avoid leverage-driven price instabilities; and a method for detecting fraudulent investment schemes by exploiting differences between the stochastic properties of their prices and those of legitimately-traded assets. To submit the hypothesis to a rigorous test we choose price data from different assets: the S", "start_char_pos": 644, "end_char_pos": 700 }, { "type": "A", "before": null, "after": "P500 index, Bitcoin, Berkshire Hathaway Inc., and Bernard L. Madoff Investment Securities LLC. Analysis of these data supports the hypothesis", "start_char_pos": 701, "end_char_pos": 701 } ]
[ 0, 131, 298, 338, 455, 564 ]
1101.5799
1
Multistability in biological systems is seen as a mechanism of cellular decision making. Biological systems are known in which several different kinases phosphorylate a single substrate and others where a single kinase phosphorylate several different substrates. Furthermore, phosphorylation in more than one site can be carried out by a unique kinase or, as in the case of priming kinases, different ones (and similarly for phosphatases) . In this paper we determine the emergence of multistationarity in small motifs that repeatedly occur in signaling pathways. Our modules are built on a one-site modification cycle and contain one or two cycles combined in all possible ways with the above features regarding the number of modification sites, and competition and non-specificity of enzymes, incorporated. We conclude that multistationarity arises whenever a single enzyme is responsible for catalyzing the modification of two different but linked substrates and that the presence of multiple steady states requires two opposing dynamics (e. g. phosphatase and kinase) acting on the same substrate. The mathematical modeling is based on mass-action kinetics and the conclusions are derived in full generality without restoring to simulations or random generation of parameters .
Multistationarity in biological systems is a mechanism of cellular decision making. In particular, signaling pathways regulated by protein phosphorylation display features that facilitate a variety of responses to different biological inputs. The features that lead to multistationarity are of particular interest to determine as well as the stability properties of the steady states . In this paper we determine conditions for the emergence of multistationarity in small motifs without feedback that repeatedly occur in signaling pathways. We derive an explicit mathematical relationship between the concentration of a chemical species at steady state and a conserved quantity of the system such as the total amount of substrate available. We show that the relation determines the number of steady states and provides a necessary condition for a steady state to be stable, that is, to be biologically attainable. Further, we identify characteristics of the motifs that lead to multistationarity, and extend the view that multistationarity in signaling pathways arises from multisite phosphorylation. Our approach relies on mass-action kinetics and the conclusions are drawn in full generality without resorting to simulations or random generation of parameters . The approach is extensible to other systems .
[ { "type": "R", "before": "Multistability", "after": "Multistationarity", "start_char_pos": 0, "end_char_pos": 14 }, { "type": "D", "before": "seen as", "after": null, "start_char_pos": 40, "end_char_pos": 47 }, { "type": "R", "before": "Biological systems are known in which several different kinases phosphorylate a single substrate and others where a single kinase phosphorylate several different substrates. Furthermore, phosphorylation in more than one site can be carried out by a unique kinase or, as in the case of priming kinases, different ones (and similarly for phosphatases)", "after": "In particular, signaling pathways regulated by protein phosphorylation display features that facilitate a variety of responses to different biological inputs. The features that lead to multistationarity are of particular interest to determine as well as the stability properties of the steady states", "start_char_pos": 89, "end_char_pos": 438 }, { "type": "A", "before": null, "after": "conditions for", "start_char_pos": 468, "end_char_pos": 468 }, { "type": "A", "before": null, "after": "without feedback", "start_char_pos": 520, "end_char_pos": 520 }, { "type": "R", "before": "Our modules are built on a one-site modification cycle and contain one or two cycles combined in all possible ways with the above features regarding the", "after": "We derive an explicit mathematical relationship between the concentration of a chemical species at steady state and a conserved quantity of the system such as the total amount of substrate available. We show that the relation determines the", "start_char_pos": 566, "end_char_pos": 718 }, { "type": "R", "before": "modification sites, and competition and non-specificity of enzymes, incorporated. We conclude that multistationarity arises whenever a single enzyme is responsible for catalyzing the modification of two different but linked substrates and that the presence of multiple steady states requires two opposing dynamics (e. g. phosphatase and kinase) acting on the same substrate. The mathematical modeling is based", "after": "steady states and provides a necessary condition for a steady state to be stable, that is, to be biologically attainable. Further, we identify characteristics of the motifs that lead to multistationarity, and extend the view that multistationarity in signaling pathways arises from multisite phosphorylation. Our approach relies", "start_char_pos": 729, "end_char_pos": 1138 }, { "type": "R", "before": "derived", "after": "drawn", "start_char_pos": 1187, "end_char_pos": 1194 }, { "type": "R", "before": "restoring", "after": "resorting", "start_char_pos": 1222, "end_char_pos": 1231 }, { "type": "A", "before": null, "after": ". The approach is extensible to other systems", "start_char_pos": 1282, "end_char_pos": 1282 } ]
[ 0, 88, 262, 440, 565, 810, 1103 ]
1101.5814
1
We propose and study a duplication-innovation-loss model of genome evolution taking into account biological roles of constituent genes . In our model numbers of genes in different functional categories are coupled to each other. For example, an increase in the number of metabolic enzymes in a genome is usually accompanied by addition of new transcription factors regulating these enzymes. Such coupling can be thought of as a proportional "recipe" for genome composition of the type "a spoonful of sugar for each egg yolk". The model jointly reproduces two known empirical scaling laws: the scale-free distribution of family sizes and the nonlinear scaling of the number of genes in certain functional categories with genome size. In addition, it allows us to derive a novel relation between the exponents characterizing these two scaling laws. This mathematical property of our model was subsequently confirmed in real-life prokaryotic genomes. To further support the assumptions of our model we present an empirical evidence of correlations between functional repertoires of prokaryotic genomes. The overall pattern of these correlations remains mostly unchanged when large and small genomes are analyzed separately. This hints at universality of biological mechanisms shaping up prokaryotic genomes irrespective of their size .
We propose and study a duplication-innovation-loss model of genome evolution taking into account biological roles of genes and their constituent domains . In our model numbers of genes in different functional categories are coupled to each other. For example, an increase in the number of metabolic enzymes in a genome is usually accompanied by addition of new transcription factors regulating these enzymes. Such coupling can be thought of as a proportional "recipe" for genome composition of the type "a spoonful of sugar for each egg yolk". The model jointly reproduces two known empirical laws: the powerlaw distribution of family sizes and the nonlinear scaling of the number of genes in certain functional categories (e.g. transcription factors) with genome size. In addition, it allows us to derive a novel relation between the exponents characterizing these two scaling laws. It predicts that functional categories that grow faster-than-linearly with genome size to be characterized by flatter-than-average family size distributions. This exponent relation is subsequently confirmed by our bioinformatics analysis of prokaryotic genomes .
[ { "type": "R", "before": "constituent genes", "after": "genes and their constituent domains", "start_char_pos": 117, "end_char_pos": 134 }, { "type": "D", "before": "scaling", "after": null, "start_char_pos": 575, "end_char_pos": 582 }, { "type": "R", "before": "scale-free", "after": "powerlaw", "start_char_pos": 593, "end_char_pos": 603 }, { "type": "A", "before": null, "after": "(e.g. transcription factors)", "start_char_pos": 715, "end_char_pos": 715 }, { "type": "R", "before": "This mathematical property of our model was subsequently confirmed in real-life prokaryotic genomes. To further support the assumptions of our model we present an empirical evidence of correlations between functional repertoires of prokaryotic genomes. The overall pattern of these correlations remains mostly unchanged when large and small genomes are analyzed separately. This hints at universality of biological mechanisms shaping up prokaryotic genomes irrespective of their size", "after": "It predicts that functional categories that grow faster-than-linearly with genome size to be characterized by flatter-than-average family size distributions. This exponent relation is subsequently confirmed by our bioinformatics analysis of prokaryotic genomes", "start_char_pos": 848, "end_char_pos": 1331 } ]
[ 0, 136, 228, 390, 525, 733, 847, 948, 1100, 1221 ]
1101.5814
2
We propose and study a duplication-innovation-loss model of genome evolution taking into account biological roles of genes and their constituent domains. In our model numbers of genes in different functional categories are coupled to each other. For example, an increase in the number of metabolic enzymes in a genome is usually accompanied by addition of new transcription factors regulating these enzymes. Such coupling can be thought of as a proportional "recipe" for genome composition of the type "a spoonful of sugar for each egg yolk". The model jointly reproduces two known empirical laws: the powerlaw distribution of family sizes and the nonlinear scaling of the number of genes in certain functional categories (e.g. transcription factors) with genome size. In addition, it allows us to derive a novel relation between the exponents characterizing these two scaling laws . It predicts that functional categories that grow faster-than-linearly with genome size to be characterized by flatter-than-average family size distributions. This exponent relation is subsequently confirmed by our bioinformatics analysis of prokaryotic genomes .
We propose and study a class-expansion/innovation/loss model of genome evolution taking into account biological roles of genes and their constituent domains. In our model numbers of genes in different functional categories are coupled to each other. For example, an increase in the number of metabolic enzymes in a genome is usually accompanied by addition of new transcription factors regulating these enzymes. Such coupling can be thought of as a proportional "recipe" for genome composition of the type "a spoonful of sugar for each egg yolk". The model jointly reproduces two known empirical laws: the distribution of family sizes and the nonlinear scaling of the number of genes in certain functional categories (e.g. transcription factors) with genome size. In addition, it allows us to derive a novel relation between the exponents characterising these two scaling laws , establishing a direct quantitative connection between evolutionary and functional categories . It predicts that functional categories that grow faster-than-linearly with genome size to be characterised by flatter-than-average family size distributions. This relation is confirmed by our bioinformatics analysis of prokaryotic genomes . This proves that the joint quantitative trends of functional and evolutionary classes can be understood in terms of evolutionary growth with proportional recipes .
[ { "type": "R", "before": "duplication-innovation-loss", "after": "class-expansion/innovation/loss", "start_char_pos": 23, "end_char_pos": 50 }, { "type": "D", "before": "powerlaw", "after": null, "start_char_pos": 602, "end_char_pos": 610 }, { "type": "R", "before": "characterizing", "after": "characterising", "start_char_pos": 844, "end_char_pos": 858 }, { "type": "A", "before": null, "after": ", establishing a direct quantitative connection between evolutionary and functional categories", "start_char_pos": 882, "end_char_pos": 882 }, { "type": "R", "before": "characterized", "after": "characterised", "start_char_pos": 978, "end_char_pos": 991 }, { "type": "R", "before": "exponent relation is subsequently", "after": "relation is", "start_char_pos": 1048, "end_char_pos": 1081 }, { "type": "A", "before": null, "after": ". This proves that the joint quantitative trends of functional and evolutionary classes can be understood in terms of evolutionary growth with proportional recipes", "start_char_pos": 1146, "end_char_pos": 1146 } ]
[ 0, 153, 245, 407, 542, 768, 884, 1042 ]
1101.5849
1
The importance of collateralization through the change of funding cost is now well recognized among practitioners. In this article, we have extended the previous studies of collateralized derivative pricing to more generic situation, that is asymmetric and imperfect collateralization as well as the associated CVA. We have presented approximate expressions for various cases using Gateaux derivative which allow straightforward numerical analysis. Numerical examples for CCS (cross currency swap)and IRS(interest rate swap) with asymmetric collateralization were also provided. They clearly show the practical relevance of sophisticated collateral management for financial firms. The valuation and the associated issue of collateral cost under the one-way CSA (or unilateral collateralization) , which is common when SSA (sovereign, supranational and agency) entities are involved, have been also studied . We have also discussed some generic implications of asymmetric collateralization for netting and resolution of information .
The importance of collateralization through the change of funding cost is now well recognized among practitioners. In this article, we have extended the previous studies of collateralized derivative pricing to more generic situation, that is asymmetric and imperfect collateralization with the associated counter party credit risk. By introducing the collateral coverage ratio, our framework can handle these issues in an unified manner. Although the resultant pricing formula becomes non-linear FBSDE and cannot be solve exactly, the fist order approximation is provided using Gateaux derivative . We have shown that it allows us to decompose the price of generic contract into three parts: market benchmark, bilateral credit value adjustment (CVA), and the collateral cost adjustment (CCA) independent from the credit risk. We have studied each term closely, and demonstrated the significant impact of asymmetric collateralization through CCA using the numerical examples .
[ { "type": "R", "before": "as well as the associated CVA. We have presented approximate expressions for various cases", "after": "with the associated counter party credit risk. By introducing the collateral coverage ratio, our framework can handle these issues in an unified manner. Although the resultant pricing formula becomes non-linear FBSDE and cannot be solve exactly, the fist order approximation is provided", "start_char_pos": 285, "end_char_pos": 375 }, { "type": "D", "before": "which allow straightforward numerical analysis. Numerical examples for CCS (cross currency swap)and IRS(interest rate swap) with asymmetric collateralization were also provided. They clearly show the practical relevance of sophisticated collateral management for financial firms. The valuation and the associated issue of collateral cost under the one-way CSA (or unilateral collateralization) , which is common when SSA (sovereign, supranational and agency) entities are involved, have been also studied", "after": null, "start_char_pos": 401, "end_char_pos": 905 }, { "type": "A", "before": null, "after": "We have shown that it allows us to decompose the price of generic contract into three parts: market benchmark, bilateral credit value adjustment (CVA), and the collateral cost adjustment (CCA) independent from the credit risk.", "start_char_pos": 908, "end_char_pos": 908 }, { "type": "R", "before": "also discussed some generic implications", "after": "studied each term closely, and demonstrated the significant impact", "start_char_pos": 917, "end_char_pos": 957 }, { "type": "R", "before": "for netting and resolution of information", "after": "through CCA using the numerical examples", "start_char_pos": 990, "end_char_pos": 1031 } ]
[ 0, 114, 315, 448, 578, 680, 907 ]
1102.0346
1
We consider a utility-maximization problem in a general semimartingale financial market , subject to constraints on the number of shares held in each risky asset. These constraints are modeled by predictable convex-set-valued processes whose values do not necessarily contain the origin, i.e., no risky investment at all may be inadmissible . Such a setup subsumes the classical constrained utility-maximization problem, as well as the problem where illiquid assets or a random endowment are present. Our main result establishes existence of the optimal trading strategies in such markets under no smoothness requirements on the utility function , and relates them to the corresponding dual objects. Moreover, we show that the dual optimization problem can be posed over a set of countably-additive probability measures, thus eschewing the need for the usual finitely-additive enlargement.
We consider a utility-maximization problem in a general semimartingale financial model , subject to constraints on the number of shares held in each risky asset. These constraints are modeled by predictable convex-set-valued processes whose values do not necessarily contain the origin, i.e., it may be inadmissible for an investor to hold no risky investment at all . Such a setup subsumes the classical constrained utility-maximization problem, as well as the problem where illiquid assets or a random endowment are present. Our main result establishes the existence of optimal trading strategies in such models under no smoothness requirements on the utility function . The result also shows that, up to attainment, the dual optimization problem can be posed over a set of countably-additive probability measures, thus eschewing the need for the usual finitely additive enlargement.
[ { "type": "R", "before": "market", "after": "model", "start_char_pos": 81, "end_char_pos": 87 }, { "type": "A", "before": null, "after": "it may be inadmissible for an investor to hold", "start_char_pos": 294, "end_char_pos": 294 }, { "type": "D", "before": "may be inadmissible", "after": null, "start_char_pos": 322, "end_char_pos": 341 }, { "type": "R", "before": "existence of the", "after": "the existence of", "start_char_pos": 530, "end_char_pos": 546 }, { "type": "R", "before": "markets", "after": "models", "start_char_pos": 582, "end_char_pos": 589 }, { "type": "R", "before": ", and relates them to the corresponding dual objects. Moreover, we show that", "after": ". The result also shows that, up to attainment,", "start_char_pos": 647, "end_char_pos": 723 }, { "type": "R", "before": "finitely-additive", "after": "finitely additive", "start_char_pos": 860, "end_char_pos": 877 } ]
[ 0, 162, 343, 501, 700 ]
1102.0346
2
We consider a utility-maximization problem in a general semimartingale financial model, subject to constraints on the number of shares held in each risky asset. These constraints are modeled by predictable convex-set-valued processes whose values do not necessarily contain the origin , i.e., it may be inadmissible for an investor to hold no risky investment at all. Such a setup subsumes the classical constrained utility-maximization problem, as well as the problem where illiquid assets or a random endowment are present. Our main result establishes the existence of optimal trading strategies in such models under no smoothness requirements on the utility function. The result also shows that, up to attainment, the dual optimization problem can be posed over a set of countably-additive probability measures, thus eschewing the need for the usual finitely additive enlargement.
We consider a utility-maximization problem in a general semimartingale financial model, subject to constraints on the number of shares held in each risky asset. These constraints are modeled by predictable convex-set-valued processes whose values do not necessarily contain the origin ; that is, it may be inadmissible for an investor to hold no risky investment at all. Such a setup subsumes the classical constrained utility-maximization problem, as well as the problem where illiquid assets or a random endowment are present. Our main result establishes the existence of optimal trading strategies in such models under no smoothness requirements on the utility function. The result also shows that, up to attainment, the dual optimization problem can be posed over a set of countably-additive probability measures, thus eschewing the need for the usual finitely-additive enlargement.
[ { "type": "R", "before": ", i.e.,", "after": "; that is,", "start_char_pos": 285, "end_char_pos": 292 }, { "type": "R", "before": "finitely additive", "after": "finitely-additive", "start_char_pos": 853, "end_char_pos": 870 } ]
[ 0, 160, 367, 525, 670 ]
1102.1186
1
We consider an optimal consumption - investment problem for financial markets of Black-Scholes 's type with the random coefficients . The existence and uniqueness theorem for the Hamilton-Jacobi-Bellman (HJB) equation is shown. We construct an iterative sequence of functions converging to the solutionof this equation. An optimal convergence rate for this sequence is found and sharp computable upper bounds for the approximation accuracy of the optimal consumption - investment strategies are obtained. It turns out that the optimal convergence rate in this case is super geometrical, i.e. is more rapid than any geometrical rate .
We consider an optimal investment and consumption problem for a Black-Scholes financial market with stochastic coefficients driven by a diffusion process. We assume that an agent makes consumption and investment decisions based on CRRA utility functions. The dynamical programming approach leads to an investigation of the Hamilton Jacobi Bellman (HJB) equation which is a highly non linear partial differential equation (PDE) of the second oder. By using the Feynman - Kac representation we prove uniqueness and smoothness of the solution. Moreover, we study the optimal convergence rate of the iterative numerical schemes for both the value function and the optimal portfolio. We show, that in this case, the optimal convergence rate is super geometrical, i.e. is more rapid than any geometrical one. We apply our results to a stochastic volatility financial market .
[ { "type": "R", "before": "consumption - investment problem for financial markets of", "after": "investment and consumption problem for a", "start_char_pos": 23, "end_char_pos": 80 }, { "type": "R", "before": "'s type with the random coefficients . The existence and uniqueness theorem for the Hamilton-Jacobi-Bellman", "after": "financial market with stochastic coefficients driven by a diffusion process. We assume that an agent makes consumption and investment decisions based on CRRA utility functions. The dynamical programming approach leads to an investigation of the Hamilton Jacobi Bellman", "start_char_pos": 95, "end_char_pos": 202 }, { "type": "R", "before": "is shown. We construct an iterative sequence of functions converging to the solutionof this equation. An", "after": "which is a highly non linear partial differential equation (PDE) of the second oder. By using the Feynman - Kac representation we prove uniqueness and smoothness of the solution. Moreover, we study the", "start_char_pos": 218, "end_char_pos": 322 }, { "type": "R", "before": "for this sequence is found and sharp computable upper bounds for the approximation accuracy of the optimal consumption - investment strategies are obtained. It turns out that", "after": "of the iterative numerical schemes for both the value function and", "start_char_pos": 348, "end_char_pos": 522 }, { "type": "A", "before": null, "after": "portfolio. We show, that in this case, the optimal", "start_char_pos": 535, "end_char_pos": 535 }, { "type": "D", "before": "in this case", "after": null, "start_char_pos": 553, "end_char_pos": 565 }, { "type": "R", "before": "rate", "after": "one. We apply our results to a stochastic volatility financial market", "start_char_pos": 628, "end_char_pos": 632 } ]
[ 0, 133, 227, 319, 504 ]
1102.2285
1
When the underlying stock price is a strict local martingale process under an equivalent local martingale measure, Black-Scholes PDE associated with an European option may have multiple solutions. In this paper, we study an approximation for the smallest hedging price of such an European option. Our results show that a class of rebate barrier options can be used for this approximation , when its rebate and barrier are chosen appropriately. An asymptotic convergence rate is also achieved when the knocked-out barrier moves to infinity under suitable conditions.
When the underlying stock price is a strict local martingale process under an equivalent local martingale measure, Black-Scholes PDE associated with an European option may have multiple solutions. In this paper, we study an approximation for the smallest hedging price of such an European option. Our results show that a class of rebate barrier options can be used for this approximation . Among of them, a speci?c rebate option is also provided with a continuous rebate function, which corresponds to the unique classical solution of the associated parabolic PDE. Such a construction makes existing numerical PDE techniques applicable for its computation. An asymptotic convergence rate is also studied when the knocked-out barrier moves to in?nity under suitable conditions.
[ { "type": "R", "before": ", when its rebate and barrier are chosen appropriately.", "after": ". Among of them, a speci?c rebate option is also provided with a continuous rebate function, which corresponds to the unique classical solution of the associated parabolic PDE. Such a construction makes existing numerical PDE techniques applicable for its computation.", "start_char_pos": 388, "end_char_pos": 443 }, { "type": "R", "before": "achieved", "after": "studied", "start_char_pos": 483, "end_char_pos": 491 }, { "type": "R", "before": "infinity", "after": "in?nity", "start_char_pos": 530, "end_char_pos": 538 } ]
[ 0, 196, 296, 443 ]
1102.2285
2
When the underlying stock price is a strict local martingale process under an equivalent local martingale measure, Black-Scholes PDE associated with an European option may have multiple solutions. In this paper, we study an approximation for the smallest hedging price of such an European option. Our results show that a class of rebate barrier options can be used for this approximation. Among of them, a speci?c rebate option is also provided with a continuous rebate function, which corresponds to the unique classical solution of the associated parabolic PDE. Such a construction makes existing numerical PDE techniques applicable for its computation. An asymptotic convergence rate is also studied when the knocked-out barrier moves to in?nity under suitable conditions.
When the underlying stock price is a strict local martingale process under an equivalent local martingale measure, Black-Scholes PDE associated with an European option may have multiple solutions. In this paper, we study an approximation for the smallest hedging price of such an European option. Our results show that a class of rebate barrier options can be used for this approximation. Among of them, a specific rebate option is also provided with a continuous rebate function, which corresponds to the unique classical solution of the associated parabolic PDE. Such a construction makes existing numerical PDE techniques applicable for its computation. An asymptotic convergence rate is also studied when the knocked-out barrier moves to infinity under suitable conditions.
[ { "type": "R", "before": "speci?c", "after": "specific", "start_char_pos": 406, "end_char_pos": 413 }, { "type": "R", "before": "in?nity", "after": "infinity", "start_char_pos": 741, "end_char_pos": 748 } ]
[ 0, 196, 296, 388, 563, 655 ]
1102.2922
1
A chemical reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. In this paper we provide a general framework for understanding the weak error of numerical approximation techniques in this setting. For such models, there is typically a wide variation in scales in that the different species and reaction rates vary over several orders of magnitude. Quantifying how different numerical approximation techniques behave in this settingtherefore requires that these scalings be taken into account in an appropriate manner . We quantify how the error of different methods depends upon both the natural scalings within a given system, and with the step-size of the numerical method. We show that Euler's method, also called explicit tau-leaping, acts as an order one method, in that the error decreases linearly with the step-size, and that the approximate midpoint method acts as either an order one or two method, depending on the relation between the time-step and the scalings in the system. Further, we introduce a new algorithm in this setting, the weak trapezoidal algorithm, which has been studied previously only in the diffusive setting , and prove that it is second order accurate in the size of the time discretization, making it the first of its kind. In the multi-scale setting it is typically an extremely difficult task to perform approximations, such as Langevin approximations or law of large number type arguments, to simplify or reduce a system. Therefore, numerical methods oftentimes are the only reasonable means by which such models can be understood in real time .
A chemical reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. For such models, there is typically a wide variation in temporal and other quantitative scales. In this multi-scale setting it is typically an extremely difficult task to perform approximations, such as Langevin approximations or law of large number type arguments, to simplify a system. Therefore, numerical methods oftentimes are the only reasonable means by which such models can be understood in real time. In this paper we provide a general framework for understanding the weak error of numerical approximation techniques in the multi-scale setting . We quantify how the error of three different methods depends upon both the natural scalings within a given system, and with the step-size of the numerical method. Further, we introduce a new algorithm in this setting, the weak trapezoidal algorithm, which was developed originally as an approximate method for diffusion processes , and prove that the leading order of the error process scales with the square of the time discretization, making it the first second order method in this setting .
[ { "type": "D", "before": "In this paper we provide a general framework for understanding the weak error of numerical approximation techniques in this setting.", "after": null, "start_char_pos": 325, "end_char_pos": 457 }, { "type": "R", "before": "scales in that the different species and reaction rates vary over several orders of magnitude. Quantifying how different", "after": "temporal and other quantitative scales. In this multi-scale setting it is typically an extremely difficult task to perform approximations, such as Langevin approximations or law of large number type arguments, to simplify a system. Therefore, numerical methods oftentimes are the only reasonable means by which such models can be understood in real time. In this paper we provide a general framework for understanding the weak error of", "start_char_pos": 514, "end_char_pos": 634 }, { "type": "R", "before": "behave in this settingtherefore requires that these scalings be taken into account in an appropriate manner", "after": "in the multi-scale setting", "start_char_pos": 670, "end_char_pos": 777 }, { "type": "A", "before": null, "after": "three", "start_char_pos": 809, "end_char_pos": 809 }, { "type": "D", "before": "We show that Euler's method, also called explicit tau-leaping, acts as an order one method, in that the error decreases linearly with the step-size, and that the approximate midpoint method acts as either an order one or two method, depending on the relation between the time-step and the scalings in the system.", "after": null, "start_char_pos": 938, "end_char_pos": 1250 }, { "type": "R", "before": "has been studied previously only in the diffusive setting", "after": "was developed originally as an approximate method for diffusion processes", "start_char_pos": 1344, "end_char_pos": 1401 }, { "type": "R", "before": "it is second order accurate in the size", "after": "the leading order of the error process scales with the square", "start_char_pos": 1419, "end_char_pos": 1458 }, { "type": "R", "before": "of its kind. In the multi-scale setting it is typically an extremely difficult task to perform approximations, such as Langevin approximations or law of large number type arguments, to simplify or reduce a system. Therefore, numerical methods oftentimes are the only reasonable means by which such models can be understood in real time", "after": "second order method in this setting", "start_char_pos": 1507, "end_char_pos": 1842 } ]
[ 0, 99, 324, 457, 608, 779, 937, 1250, 1519, 1720 ]
1102.2922
2
A chemical reaction network is a chemical system involving multiple reactions and chemical species. The simpleststochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. For such models , there is typically a wide variation in temporal and other quantitative scales. In this multi-scale setting it is typically an extremely difficult task to perform approximations, such as Langevin approximations or law of large number type arguments, to simplify a system. Therefore, numerical methodsoftentimes are the only reasonable means by which such models can be understood in real time . In this paper we provide a general framework for understanding the weak error of numerical approximation techniques in the multi-scale setting. We quantify how the error of three different methods depends upon both the natural scalings within a given system , and with the step-size of the numerical method. Further, we introduce a new algorithm in this setting, the weak trapezoidal algorithm, which was developed originally as an approximate method for diffusion processes, and prove that the leading order of the error process scales with the square of the time discretization , making it the first second order method in this setting .
The simplest, and most common, stochastic model for population processes, including those from biochemistry and cell biology, are continuous time Markov chains. Simulation of such models is often relatively straightforward as there are easily implementable methods for the generation of exact sample paths. However, when using ensemble averages to approximate expected values, the computational complexity can become prohibitive as the number of computations per path scales linearly with the number of jumps, or reactions, of the process. When such methods become computationally intractable, approximate methods, which introduce a bias, can become advantageous . In this paper , we provide a general framework for understanding the weak error , or bias, induced by different numerical approximation techniques . The analysis takes into account both the natural scalings within a given system and the step-size of the numerical method. Further, the weak trapezoidal method is introduced in the current setting, and is proven to be second order accurate in a weak sense , making it the first higher order method in this setting . Examples are provided demonstrating both the main analytical results, and the reduction in computational complexity achieved with the approximate methods .
[ { "type": "R", "before": "A chemical reaction network is a chemical system involving multiple reactions and chemical species. The simpleststochastic models of such networks treat the system as a", "after": "The simplest, and most common, stochastic model for population processes, including those from biochemistry and cell biology, are", "start_char_pos": 0, "end_char_pos": 168 }, { "type": "R", "before": "chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. For such models , there is typically a wide variation in temporal and other quantitative scales. In this multi-scale setting it is typically an extremely difficult task to perform approximations, such as Langevin approximations or law of large number type arguments, to simplify a system. Therefore, numerical methodsoftentimes are the only reasonable means by which such models can be understood in real time", "after": "chains. Simulation of such models is often relatively straightforward as there are easily implementable methods for the generation of exact sample paths. However, when using ensemble averages to approximate expected values, the computational complexity can become prohibitive as the number of computations per path scales linearly with the number of jumps, or reactions, of the process. When such methods become computationally intractable, approximate methods, which introduce a bias, can become advantageous", "start_char_pos": 192, "end_char_pos": 733 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 750, "end_char_pos": 750 }, { "type": "R", "before": "of", "after": ", or bias, induced by different", "start_char_pos": 815, "end_char_pos": 817 }, { "type": "R", "before": "in the multi-scale setting. We quantify how the error of three different methods depends upon", "after": ". The analysis takes into account", "start_char_pos": 853, "end_char_pos": 946 }, { "type": "R", "before": ", and with", "after": "and", "start_char_pos": 995, "end_char_pos": 1005 }, { "type": "D", "before": "we introduce a new algorithm in this setting,", "after": null, "start_char_pos": 1054, "end_char_pos": 1099 }, { "type": "R", "before": "algorithm, which was developed originally as an approximate method for diffusion processes, and prove that the leading order of the error process scales with the square of the time discretization", "after": "method is introduced in the current setting, and is proven to be second order accurate in a weak sense", "start_char_pos": 1121, "end_char_pos": 1316 }, { "type": "R", "before": "second", "after": "higher", "start_char_pos": 1339, "end_char_pos": 1345 }, { "type": "A", "before": null, "after": ". Examples are provided demonstrating both the main analytical results, and the reduction in computational complexity achieved with the approximate methods", "start_char_pos": 1375, "end_char_pos": 1375 } ]
[ 0, 99, 323, 420, 612, 735, 880, 1044 ]
1102.2922
3
The simplest, and most common, stochastic model for population processes, including those from biochemistry and cell biology, are continuous time Markov chains. Simulation of such models is often relatively straightforward as there are easily implementable methods for the generation of exact sample paths. However, when using ensemble averages to approximate expected values, the computational complexity can become prohibitive as the number of computations per path scales linearly with the number of jumps , or reactions, of the process. When such methods become computationally intractable, approximate methods, which introduce a bias, can become advantageous. In this paper, we provide a general framework for understanding the weak error, or bias, induced by different numerical approximation techniques . The analysis takes into account both the natural scalings within a given system and the step-size of the numerical method. Further, the weak trapezoidal method is introduced in the current setting, and is proven to be second order accurate in a weak sense, making it the first higher order method in this setting. Examples are provided demonstrating both the main analytical results , and the reduction in computational complexity achieved with the approximate methods.
The simplest, and most common, stochastic model for population processes, including those from biochemistry and cell biology, are continuous time Markov chains. Simulation of such models is often relatively straightforward as there are easily implementable methods for the generation of exact sample paths. However, when using ensemble averages to approximate expected values, the computational complexity can become prohibitive as the number of computations per path scales linearly with the number of jumps of the process. When such methods become computationally intractable, approximate methods, which introduce a bias, can become advantageous. In this paper, we provide a general framework for understanding the weak error, or bias, induced by different numerical approximation techniques in the current setting . The analysis takes into account both the natural scalings within a given system and the step-size of the numerical method. Examples are provided to demonstrate the main analytical results as well as the reduction in computational complexity achieved by the approximate methods.
[ { "type": "D", "before": ", or reactions,", "after": null, "start_char_pos": 509, "end_char_pos": 524 }, { "type": "A", "before": null, "after": "in the current setting", "start_char_pos": 810, "end_char_pos": 810 }, { "type": "D", "before": "Further, the weak trapezoidal method is introduced in the current setting, and is proven to be second order accurate in a weak sense, making it the first higher order method in this setting.", "after": null, "start_char_pos": 936, "end_char_pos": 1126 }, { "type": "R", "before": "demonstrating both", "after": "to demonstrate", "start_char_pos": 1149, "end_char_pos": 1167 }, { "type": "R", "before": ", and", "after": "as well as", "start_char_pos": 1196, "end_char_pos": 1201 }, { "type": "R", "before": "with", "after": "by", "start_char_pos": 1253, "end_char_pos": 1257 } ]
[ 0, 160, 306, 540, 664, 812, 935, 1126 ]
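The record above (arXiv:1102.2922) contrasts exact path simulation of continuous-time Markov chains with cheaper approximate schemes that trade accuracy for a bias. The sketch below is only an illustration of that trade-off, not code from the paper: the pure birth model, rate constant, initial count, horizon and step size are all made-up assumptions, and Euler tau-leaping is chosen here merely as a representative approximate scheme. Its bias in the mean can be checked against the known value X0*exp(c*T).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pure birth process: X -> X + 1 with propensity c*X; the exact mean is X0*exp(c*T).
c, X0, T = 1.0, 10, 1.0

def exact_path(rng):
    """Exact (Gillespie-style) simulation: one exponentially distributed waiting time per jump."""
    t, x = 0.0, X0
    while True:
        t += rng.exponential(1.0 / (c * x))
        if t > T:
            return x
        x += 1

def euler_tau_leap(rng, h):
    """Euler tau-leaping: Poisson number of firings per step of length h (introduces a bias)."""
    x = X0
    for _ in range(int(round(T / h))):
        x += rng.poisson(c * x * h)
    return x

n = 20000
exact_mc = np.mean([exact_path(rng) for _ in range(n)])
approx_mc = np.mean([euler_tau_leap(rng, 0.1) for _ in range(n)])
print(f"true mean {X0 * np.exp(c * T):.2f}, exact MC {exact_mc:.2f}, tau-leap MC (h=0.1) {approx_mc:.2f}")
```

Halving h should roughly halve the gap between the tau-leaping estimate and the true mean; quantifying this kind of step-size and scaling dependence is what the weak-error analysis described in the record is about.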
1102.2956
1
Cells use genetic switches to shift between alternate gene expression states, e.g., to adapt to new environments or to follow a preprogrammed developmental pathway. Here, we investigate the dynamics of switching between metastable states of a feedback based on/off switch , by employing the WKB theory to treat the underlying master equations. This technique, applicable for any generic feedback function, yields accurate results for the quasi-stationary distributions of mRNA and protein copy numbers and mean switching time, starting from either state. Our analytical results compare well with Monte Carlo simulations. Importantly, the approach may be used to study the effect of varying biological parameters on the stability of the switch states, and we use it to show that in some cases promoter kinetics, not just thermodynamic stability, can influence switching .
Cells use genetic switches to shift between alternate gene expression states, e.g., to adapt to new environments or to follow a developmental pathway. Here, we study the dynamics of switching in a generic-feedback on/off switch . Unlike protein-only models, we explicitly account for stochastic fluctuations of mRNA, which have a dramatic impact on switch dynamics. Employing the WKB theory to treat the underlying chemical master equations, we obtain accurate results for the quasi-stationary distributions of mRNA and protein copy numbers and for the mean switching time, starting from either state. Our analytical results agree well with Monte Carlo simulations. Importantly, one can use the approach to study the effect of varying biological parameters on switch stability .
[ { "type": "D", "before": "preprogrammed", "after": null, "start_char_pos": 128, "end_char_pos": 141 }, { "type": "R", "before": "investigate", "after": "study", "start_char_pos": 174, "end_char_pos": 185 }, { "type": "R", "before": "between metastable states of a feedback based", "after": "in a generic-feedback", "start_char_pos": 212, "end_char_pos": 257 }, { "type": "R", "before": ", by employing", "after": ". Unlike protein-only models, we explicitly account for stochastic fluctuations of mRNA, which have a dramatic impact on switch dynamics. Employing", "start_char_pos": 272, "end_char_pos": 286 }, { "type": "R", "before": "master equations. This technique, applicable for any generic feedback function, yields", "after": "chemical master equations, we obtain", "start_char_pos": 326, "end_char_pos": 412 }, { "type": "A", "before": null, "after": "for the", "start_char_pos": 506, "end_char_pos": 506 }, { "type": "R", "before": "compare", "after": "agree", "start_char_pos": 579, "end_char_pos": 586 }, { "type": "R", "before": "the approach may be used", "after": "one can use the approach", "start_char_pos": 635, "end_char_pos": 659 }, { "type": "R", "before": "the stability of the switch states, and we use it to show that in some cases promoter kinetics, not just thermodynamic stability, can influence switching", "after": "switch stability", "start_char_pos": 716, "end_char_pos": 869 } ]
[ 0, 164, 343, 555, 621 ]
1102.3218
1
This paper analyzes Least Squares Monte Carlo (LSM) algorithm, which is proposed by Longstaff and Schwartz (2001) for pricing American style securities. This algorithm is based on the projection of the value of continuation onto a certain set of basis functions via the least squares problem. We analyze the stability of the algorithm when the number of exercise dates increases and prove that if the underlying process for the stock price is continuous then the regression problem is ill-conditioned for small values of parameter t, time .
Consider Least Squares Monte Carlo (LSM) algorithm, which is proposed by Longstaff and Schwartz (2001) for pricing American style securities. This algorithm is based on the projection of the value of continuation onto a certain set of basis functions via the least squares problem. We analyze the stability of the algorithm when the number of exercise dates increases and prove that , if the underlying process for the stock price is continuous , then the regression problem is ill-conditioned for small values of the time parameter .
[ { "type": "R", "before": "This paper analyzes", "after": "Consider", "start_char_pos": 0, "end_char_pos": 19 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 394, "end_char_pos": 394 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 455, "end_char_pos": 455 }, { "type": "R", "before": "parameter t, time", "after": "the time parameter", "start_char_pos": 523, "end_char_pos": 540 } ]
[ 0, 152, 292 ]
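The record above (arXiv:1102.3218) concerns the conditioning of the least-squares regression inside the Longstaff-Schwartz algorithm for small values of the time parameter. The snippet below is a rough numerical illustration, not the paper's analysis: the geometric Brownian motion parameters and the monomial basis are assumptions of this sketch. Because a continuous price process barely moves over a short time t, the simulated values are nearly identical, the basis columns become nearly collinear, and the condition number of the design matrix blows up as t decreases.

```python
import numpy as np

rng = np.random.default_rng(1)

S0, r, sigma = 100.0, 0.05, 0.2    # assumed GBM parameters
n_paths, degree = 10000, 4         # monomial basis 1, s, s^2, ..., s^degree

def design_condition(t):
    """Condition number of an LSM-style design matrix built from GBM samples at time t."""
    z = rng.standard_normal(n_paths)
    s = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    A = np.vander(s / S0, N=degree + 1, increasing=True)   # columns: 1, s/S0, (s/S0)^2, ...
    return np.linalg.cond(A)

for t in (1.0, 0.1, 0.01, 0.001):
    print(f"t = {t:6.3f}   cond(design matrix) ~ {design_condition(t):.3e}")
```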
1102.3232
1
Wireless sensor networks (WSNs) became one of the high-technology domains during the last years. Real-time applications for them make it necessary to provide the guaranteed Quality of Service (QoS). The main contributions of the paper are a service framework and a guaranteed QoS model that are suitable for the WSNs with some characteristics of the distribution, multi-hop, etc . To do it, we develop a sensor node model based on virtual buffer sharing and present a two-layer scheduling model by arrival curves and service curves in the network calculus. With this service framework , we develop a guaranteed QoS model, such as the upper bounds on buffer queue length , buffer queue delay, end-to-end delay/jitter and end-to-end effective bandwidth. Numerical results show that the service framework and guaranteed QoS model are highly scalable for different types of input flows, including the self-similar input flows, and they are only affected by the parameters of service curves and flow regulators . Our proposal leads to buffer dimensioning, guaranteed QoS support and control in WSNs.
Wireless sensor networks (WSNs) became one of the high technology domains during the last ten years. Real-time applications for them make it necessary to provide the guaranteed Quality of Service (QoS). The main contributions of this paper are a system skeleton and a guaranteed QoS model that are suitable for the WSNs . To do it, we develop a sensor node model based on virtual buffer sharing and present a two-layer scheduling model using the network calculus. With the system skeleton , we develop a guaranteed QoS model, such as the upper bounds on buffer queue length /delay/effective bandwidth, and single-hop/ multi-hops delay/jitter / effective bandwidth. Numerical results show the system skeleton and the guaranteed QoS model are scalable for different types of flows, including the self-similar traffic flows, and the parameters of flow regulators and service curves of sensor nodes affect them . Our proposal leads to buffer dimensioning, guaranteed QoS support and control in the WSNs.
[ { "type": "R", "before": "high-technology", "after": "high technology", "start_char_pos": 50, "end_char_pos": 65 }, { "type": "A", "before": null, "after": "ten", "start_char_pos": 90, "end_char_pos": 90 }, { "type": "R", "before": "the", "after": "this", "start_char_pos": 226, "end_char_pos": 229 }, { "type": "R", "before": "service framework", "after": "system skeleton", "start_char_pos": 242, "end_char_pos": 259 }, { "type": "D", "before": "with some characteristics of the distribution, multi-hop, etc", "after": null, "start_char_pos": 318, "end_char_pos": 379 }, { "type": "R", "before": "by arrival curves and service curves in", "after": "using", "start_char_pos": 496, "end_char_pos": 535 }, { "type": "R", "before": "this service framework", "after": "the system skeleton", "start_char_pos": 563, "end_char_pos": 585 }, { "type": "R", "before": ", buffer queue delay, end-to-end", "after": "/delay/effective bandwidth, and single-hop/ multi-hops", "start_char_pos": 671, "end_char_pos": 703 }, { "type": "R", "before": "and end-to-end", "after": "/", "start_char_pos": 717, "end_char_pos": 731 }, { "type": "R", "before": "that the service framework and", "after": "the system skeleton and the", "start_char_pos": 776, "end_char_pos": 806 }, { "type": "D", "before": "highly", "after": null, "start_char_pos": 832, "end_char_pos": 838 }, { "type": "D", "before": "input", "after": null, "start_char_pos": 871, "end_char_pos": 876 }, { "type": "R", "before": "input", "after": "traffic", "start_char_pos": 911, "end_char_pos": 916 }, { "type": "D", "before": "they are only affected by", "after": null, "start_char_pos": 928, "end_char_pos": 953 }, { "type": "R", "before": "service curves and flow regulators", "after": "flow regulators and service curves of sensor nodes affect them", "start_char_pos": 972, "end_char_pos": 1006 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1090, "end_char_pos": 1090 } ]
[ 0, 97, 199, 381, 557, 752, 1008 ]
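The record above (arXiv:1102.3232) derives per-node and end-to-end guarantees from network-calculus arrival and service curves. The helper below only restates the textbook single-node bounds for a token-bucket arrival curve alpha(t) = b + r*t served by a rate-latency curve beta(t) = R*(t - T)+; the numeric values are arbitrary and none of this is taken from the paper's WSN model. Whenever r <= R, the backlog is bounded by b + r*T and the delay by T + b/R.

```python
def single_node_bounds(b, r, R, T):
    """Backlog and delay bounds for arrival curve b + r*t and service curve R*(t - T)+.

    Valid when the sustained arrival rate r does not exceed the service rate R.
    """
    if r > R:
        raise ValueError("flow is not schedulable: arrival rate exceeds service rate")
    backlog_bound = b + r * T   # maximum vertical deviation between the curves
    delay_bound = T + b / R     # maximum horizontal deviation between the curves
    return backlog_bound, delay_bound

# Token bucket: burst 2000 bits at 50 kbit/s; node serves 250 kbit/s after 4 ms latency.
backlog, delay = single_node_bounds(b=2000.0, r=50e3, R=250e3, T=0.004)
print(f"backlog bound = {backlog:.0f} bits, delay bound = {delay * 1e3:.1f} ms")
```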
1102.3313
1
A new dynamical model is presented for chiral change in DNA molecules. The model is an extension of the conventional elastic model which incorporates the structure of base pairs and uses a spinor representation for the DNA configuration together with a gauge principle. Motivated by a recent experiment reporting chiral transitions between right-handed B-DNA and left-handed Z-DNA , we analyze the free energy for the particular case of linear DNA with an externally applied torque. The model shows that there exists, similar to one-dimensional magnetic systems, a phase transition at zero-temperature depending on the torque exerted on the DNA, which causes switching in B and Z domain sizes. This can explain the frequent switches of DNA extension observed in experiments.
A dynamical model is presented for chiral change in DNA molecules. The model is an extension of the conventional elastic model which incorporates the structure of base pairs and uses a spinor representation for the DNA configuration together with a gauge principle. Motivated by a recent experiment reporting chiral transitions between right-handed B-DNA and left-handed Z-DNA [M. Lee, et. al., Proc. Natl. Acad. Sci. (USA) 107 , 4985 (2010)] , we analyze the free energy for the particular case of linear DNA with an externally applied torque. The model shows that there exists, at low temperature, a rapid structural change depending on the torque exerted on the DNA, which causes switching in B and Z domain sizes. This can explain the frequent switches of DNA extension observed in experiments.
[ { "type": "D", "before": "new", "after": null, "start_char_pos": 2, "end_char_pos": 5 }, { "type": "A", "before": null, "after": "M. Lee, et. al., Proc. Natl. Acad. Sci. (USA) 107", "start_char_pos": 381, "end_char_pos": 381 }, { "type": "A", "before": null, "after": "4985 (2010)", "start_char_pos": 384, "end_char_pos": 384 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 385, "end_char_pos": 385 }, { "type": "R", "before": "similar to one-dimensional magnetic systems, a phase transition at zero-temperature", "after": "at low temperature, a rapid structural change", "start_char_pos": 521, "end_char_pos": 604 } ]
[ 0, 70, 269, 485, 696 ]
1102.3534
1
This paper introduces a new model risk measure based on hedging strategies to estimate model risk and provision calculation under uncertainty of volatility . This measure allows comparing different products and models (pricing hypothesis) under a homogeneous framework and conclude which one is the best. The model risk measure is defined in terms of the expected value and standard deviation of the loss given by the hedging strategy at a given time horizon . It has been assumed that the market volatility surface is driven by Heston 's dynamics calibrated to market for a given time horizon . The method is applied to estimate and compare model risk under volga-vanna and Black-Scholes models for double-no-touch options and a porfolio of forward fader options.
This paper introduces a relative model risk measure of a product priced with a given model, with respect to another reference model for which the market is assumed to be driven . This measure allows comparing products valued with different models (pricing hypothesis) under a homogeneous framework which allows concluding which model is the closest to the reference. The relative model risk measure is defined as the expected shortfall of the hedging strategy at a given time horizon for a chosen significance level. The reference model has been chosen to be Heston calibrated to market for a given time horizon (this reference model should be chosen to be a market proxy) . The method is applied to estimate and compare this relative model risk measure under volga-vanna and Black-Scholes models for double-no-touch options and a portfolio of forward fader options.
[ { "type": "R", "before": "new", "after": "relative", "start_char_pos": 24, "end_char_pos": 27 }, { "type": "R", "before": "based on hedging strategies to estimate model risk and provision calculation under uncertainty of volatility", "after": "of a product priced with a given model, with respect to another reference model for which the market is assumed to be driven", "start_char_pos": 47, "end_char_pos": 155 }, { "type": "R", "before": "different products and", "after": "products valued with different", "start_char_pos": 188, "end_char_pos": 210 }, { "type": "R", "before": "and conclude which one is the best. The", "after": "which allows concluding which model is the closest to the reference. The relative", "start_char_pos": 269, "end_char_pos": 308 }, { "type": "R", "before": "in terms of the expected value and standard deviation of the loss given by the", "after": "as the expected shortfall of the", "start_char_pos": 339, "end_char_pos": 417 }, { "type": "R", "before": ". It has been assumed that the market volatility surface is driven by Heston 's dynamics", "after": "for a chosen significance level. The reference model has been chosen to be Heston", "start_char_pos": 459, "end_char_pos": 547 }, { "type": "A", "before": null, "after": "(this reference model should be chosen to be a market proxy)", "start_char_pos": 594, "end_char_pos": 594 }, { "type": "R", "before": "model risk", "after": "this relative model risk measure", "start_char_pos": 643, "end_char_pos": 653 }, { "type": "R", "before": "porfolio", "after": "portfolio", "start_char_pos": 731, "end_char_pos": 739 } ]
[ 0, 157, 304, 460, 596 ]
1102.3739
1
Understanding design principles of molecular interaction networks is an important goal of molecular systems biology. Some insights have been gained into features of their network topology through the discovery of graph theoretic motifs that constrain network dynamics. This paper contributes to the identification of motifs in the mechanisms that govern network dynamics. The control of nodes in gene regulatory, signaling, and metabolic networks is governed by a variety of biochemical mechanisms, with inputs from other network nodes that act additively or synergistically. This paper focuses on a certain type of logical rule that appears frequently as a regulatory motif . Within the context of the multistate discrete model paradigm, a rule type is introduced that reduces to the concept of nested canalyzing function in the Boolean network case. It is shown that networks that employ this type of multivalued logic exhibit more robust dynamics than random networks, with few attractors and short limit cycles. It is also shown that the majority of regulatory functions in many published models of gene regulatory and signaling networks are nested canalyzing.
Understanding design principles of molecular interaction networks is an important goal of molecular systems biology. Some insights have been gained into features of their network topology through the discovery of graph theoretic patterns that constrain network dynamics. This paper contributes to the identification of patterns in the mechanisms that govern network dynamics. The control of nodes in gene regulatory, signaling, and metabolic networks is governed by a variety of biochemical mechanisms, with inputs from other network nodes that act additively or synergistically. This paper focuses on a certain type of logical rule that appears frequently as a regulatory pattern . Within the context of the multistate discrete model paradigm, a rule type is introduced that reduces to the concept of nested canalyzing function in the Boolean network case. It is shown that networks that employ this type of multivalued logic exhibit more robust dynamics than random networks, with few attractors and short limit cycles. It is also shown that the majority of regulatory functions in many published models of gene regulatory and signaling networks are nested canalyzing.
[ { "type": "R", "before": "motifs", "after": "patterns", "start_char_pos": 229, "end_char_pos": 235 }, { "type": "R", "before": "motifs", "after": "patterns", "start_char_pos": 317, "end_char_pos": 323 }, { "type": "R", "before": "motif", "after": "pattern", "start_char_pos": 669, "end_char_pos": 674 } ]
[ 0, 116, 268, 371, 575, 676, 851, 1015 ]
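The record above (arXiv:1102.3739) is about nested canalyzing regulatory rules. A minimal Boolean sketch of such a rule is given below; the variable ordering, canalyzing inputs and canalyzed outputs are arbitrary choices for illustration, not taken from the paper. The function inspects its inputs in a fixed order and returns a fixed output as soon as one of them takes its canalyzing value.

```python
def nested_canalyzing(inputs, canalyzing, outputs, default):
    """Evaluate a Boolean nested canalyzing rule.

    inputs     -- tuple of 0/1 values, inspected in order
    canalyzing -- canalyzing value for each variable
    outputs    -- output returned when a variable takes its canalyzing value
    default    -- value when no variable is canalyzing (complement of outputs[-1]
                  in the usual definition, so that the rule is not constant)
    """
    for x, a, b in zip(inputs, canalyzing, outputs):
        if x == a:
            return b
    return default

# Example rule: x1=0 forces 0; otherwise x2=1 forces 1; otherwise x3=0 forces 1; else 0.
f = lambda x1, x2, x3: nested_canalyzing((x1, x2, x3), (0, 1, 0), (0, 1, 1), 0)
print([f(x1, x2, x3) for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)])
```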
1102.3900
1
We consider a structural model for the estimation of credit risk based on Merton's original model. By using Random-Matrix theory we demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlation are not identical zero. The existence of correlations alters the tails of the loss distribution tremendously, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
We consider a structural model for the estimation of credit risk based on Merton's original model. By using Random Matrix Theory we demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution tremendously, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
[ { "type": "R", "before": "Random-Matrix theory", "after": "Random Matrix Theory", "start_char_pos": 108, "end_char_pos": 128 }, { "type": "R", "before": "correlation are not identical", "after": "correlations are not identically", "start_char_pos": 266, "end_char_pos": 295 } ]
[ 0, 98, 301, 418 ]
1102.3900
2
We consider a structural model for the estimation of credit risk based on Merton's original model . By using Random Matrix Theory we demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution tremendously , even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory . We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably , even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
[ { "type": "R", "before": "consider a structural model for the estimation of credit risk based on Merton's original model . By using", "after": "estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by", "start_char_pos": 3, "end_char_pos": 108 }, { "type": "R", "before": "we", "after": ". We", "start_char_pos": 130, "end_char_pos": 132 }, { "type": "R", "before": "tremendously", "after": "considerably", "start_char_pos": 378, "end_char_pos": 390 } ]
[ 0, 99, 305, 423 ]
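The two records above (arXiv:1102.3900) study a Merton-type structural credit model in which nonzero asset correlations limit the benefit of diversification. The Monte Carlo sketch below uses a plain one-factor Gaussian model with made-up parameters, not the ensemble of correlation matrices considered in the paper, simply to show the qualitative effect: with zero correlation the loss fluctuations shrink as the portfolio grows, while with an average correlation of 0.2 they level off at a floor set by the common factor.

```python
import numpy as np

rng = np.random.default_rng(2)

def loss_std(n_obligors, rho, n_scen=5000, threshold=-2.0):
    """Std. dev. of the relative portfolio loss in a one-factor Gaussian Merton-type model.

    An obligor defaults (loss 1/n) when its standardized asset return falls below
    `threshold`; `rho` is the common pairwise asset correlation.
    """
    m = rng.standard_normal(n_scen)                  # systematic factor per scenario
    eps = rng.standard_normal((n_scen, n_obligors))  # idiosyncratic factors
    assets = np.sqrt(rho) * m[:, None] + np.sqrt(1.0 - rho) * eps
    loss = (assets < threshold).mean(axis=1)         # fraction of defaults per scenario
    return loss.std()

for n in (10, 100, 1000):
    print(f"n={n:5d}   std(loss) rho=0.0: {loss_std(n, 0.0):.4f}   rho=0.2: {loss_std(n, 0.2):.4f}")
```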
1102.3928
1
The risk minimizing problem E[l((H-X_T^{x,\pi})^{+})]\pi{\longrightarrow}\min in the Black-Scholes framework with correlation is studied. General formulas for the minimal risk function and the cost reduction function for the option H depending on multiple underlying are derived. The case of a linear and a strictly convex loss function l are examined. Explicit computation for l(x)=x and l(x)=x^p, with p>1 for digital, quantos, outperformance and spread options are presented. The method is based on the quantile hedging approach presented in [FL1] , [FL2] and developed for the multidimensional options in [Barski] .
The risk minimizing problem E[l((H-X_T^{x,\pi})^{+})]\pi{\longrightarrow}\min in the multidimensional Black-Scholes framework is studied. Specific formulas for the minimal risk function and the cost reduction function for basket derivatives are shown. Explicit integral representations for the risk functions for l(x)=x and l(x)=x^p, with p>1 for digital, quantos, outperformance and spread options are derived .
[ { "type": "A", "before": null, "after": "multidimensional", "start_char_pos": 85, "end_char_pos": 85 }, { "type": "D", "before": "with correlation", "after": null, "start_char_pos": 110, "end_char_pos": 126 }, { "type": "R", "before": "General", "after": "Specific", "start_char_pos": 139, "end_char_pos": 146 }, { "type": "R", "before": "the option H depending on multiple underlying are derived. The case of a linear and a strictly convex loss function l are examined. Explicit computation for", "after": "basket derivatives are shown. Explicit integral representations for the risk functions for", "start_char_pos": 222, "end_char_pos": 378 }, { "type": "D", "before": "presented. The method is based on the quantile hedging approach presented in \\mbox{%DIFAUXCMD FL1", "after": null, "start_char_pos": 469, "end_char_pos": 566 }, { "type": "A", "before": null, "after": "derived", "start_char_pos": 708, "end_char_pos": 708 } ]
[ 0, 138, 280, 353, 479 ]
1102.4055
1
In this note we give, for a spectrally negative L\'evy process, a compact formula for the Parisian ruin probability, which is defined by the probability that the process exhibits an excursion below zero which length exceeds a certain fixed period r. The formula involves only the scale function of the spectrally negative L\'evy process and the distribution of the process at time r.
In this note we give, for a spectrally negative Levy process, a compact formula for the Parisian ruin probability, which is defined by the probability that the process exhibits an excursion below zero , with a length that exceeds a certain fixed period r. The formula involves only the scale function of the spectrally negative Levy process and the distribution of the process at time r.
[ { "type": "R", "before": "L\\'evy", "after": "Levy", "start_char_pos": 48, "end_char_pos": 54 }, { "type": "R", "before": "which length", "after": ", with a length that", "start_char_pos": 203, "end_char_pos": 215 }, { "type": "R", "before": "L\\'evy", "after": "Levy", "start_char_pos": 322, "end_char_pos": 328 } ]
[ 0, 249 ]
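The record above (arXiv:1102.4055) expresses the Parisian ruin probability of a spectrally negative Levy process through its scale function and its law at time r. The sketch below is only a crude Monte Carlo check of the definition itself, for the simplest spectrally negative example, a Brownian motion with drift; the drift, volatility, initial surplus, delay r, horizon and grid step are all illustrative assumptions and are not taken from the note.

```python
import numpy as np

rng = np.random.default_rng(3)

x0, mu, sigma = 1.0, 0.3, 1.0   # initial surplus and Brownian-with-drift parameters
T, dt, r = 20.0, 0.01, 0.5      # finite horizon, grid step, Parisian delay

def parisian_ruin(rng):
    """True if the discretized path spends an uninterrupted time longer than r below zero."""
    n = int(round(T / dt))
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    path = x0 + np.cumsum(increments)
    time_below = 0.0
    for value in path:
        time_below = time_below + dt if value < 0.0 else 0.0
        if time_below > r:
            return True
    return False

n_paths = 1000
estimate = np.mean([parisian_ruin(rng) for _ in range(n_paths)])
print(f"Monte Carlo Parisian ruin probability up to horizon {T}: {estimate:.3f}")
```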
1102.5078
1
This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is the well posedness of the optimization problem . We find examples in which this problem is well posed. Numerical experiments provide the efficient frontier . The methodology developed in this paper can be also applied to pricing and hedging in incomplete markets.
This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well posed quadratic program . The methodology developed in this paper can be also applied to pricing and hedging in incomplete markets.
[ { "type": "R", "before": "the well posedness of the optimization problem . We find examples in which this problem is well posed. Numerical experiments provide the efficient frontier", "after": "due to portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well posed quadratic program", "start_char_pos": 178, "end_char_pos": 333 } ]
[ 0, 68, 144, 226, 280, 335 ]
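The record above (arXiv:1102.5078) relies on a delta-gamma approximation to turn a portfolio containing derivatives into a tractable quadratic problem. As a small stand-alone illustration, the second-order Taylor expansion in the underlying price reproduces the change in a Black-Scholes call value for moderate moves; the option parameters below are assumptions of this sketch, not the paper's portfolio.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, vol):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * vol**2) * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S0, K, T, r, vol = 100.0, 100.0, 1.0, 0.02, 0.25
h = 0.01
# Finite-difference delta and gamma of the call at S0.
delta = (bs_call(S0 + h, K, T, r, vol) - bs_call(S0 - h, K, T, r, vol)) / (2 * h)
gamma = (bs_call(S0 + h, K, T, r, vol) - 2 * bs_call(S0, K, T, r, vol) + bs_call(S0 - h, K, T, r, vol)) / h**2

for dS in (-5.0, -1.0, 1.0, 5.0):
    exact = bs_call(S0 + dS, K, T, r, vol) - bs_call(S0, K, T, r, vol)
    approx = delta * dS + 0.5 * gamma * dS**2   # delta-gamma approximation
    print(f"dS={dS:+5.1f}   exact change={exact:+8.4f}   delta-gamma={approx:+8.4f}")
```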
1102.5314
1
We consider the problem of jointly optimizing channel pairing, channel-user assignment, and power allocation in a single-relay cooperative system with multiple channels and multiple users , under several common relaying strategies . Both weighted sum-rate and max-min rate are consideredas the optimization objective , and transmission power constraints are imposed on both individual transmitters and the aggregate over all transmitters. This joint optimization problem naturally leads to a mixed-integer program. Despite the general expectation that such problems are intractable, we construct an efficient algorithm to find an optimal solution, which incurs computational complexity that is polynomial in the number of channels and the number of users . The proposed solution is based on continuous relaxation, which usually only leads to heuristic or approximate solutions, but the rich structure in our problem renders it an exception. By observing the special structure of a three-dimensional assignment problem derived from the original problem, we show that the obtained solution is not only optimal, but also computationally efficient through judicious choices of the optimization trajectory . We further demonstrate through numerical experiments that the jointly optimal solution can significantly improve system performance over its suboptimal alternatives.
We consider the problem of jointly optimizing channel pairing, channel-user assignment, and power allocation , to maximize the weighted sum-rate, in a single-relay cooperative system with multiple channels and multiple users . Common relaying strategies are considered , and transmission power constraints are imposed on both individual transmitters and the aggregate over all transmitters. The joint optimization problem naturally leads to a mixed-integer program. Despite the general expectation that such problems are intractable, we construct an efficient algorithm to find an optimal solution, which incurs computational complexity that is polynomial in the number of channels and the number of users . We further demonstrate through numerical experiments that the jointly optimal solution can significantly improve system performance over its suboptimal alternatives.
[ { "type": "A", "before": null, "after": ", to maximize the weighted sum-rate,", "start_char_pos": 109, "end_char_pos": 109 }, { "type": "R", "before": ", under several common relaying strategies . Both weighted sum-rate and max-min rate are consideredas the optimization objective", "after": ". Common relaying strategies are considered", "start_char_pos": 189, "end_char_pos": 317 }, { "type": "R", "before": "This", "after": "The", "start_char_pos": 440, "end_char_pos": 444 }, { "type": "D", "before": ". The proposed solution is based on continuous relaxation, which usually only leads to heuristic or approximate solutions, but the rich structure in our problem renders it an exception. By observing the special structure of a three-dimensional assignment problem derived from the original problem, we show that the obtained solution is not only optimal, but also computationally efficient through judicious choices of the optimization trajectory", "after": null, "start_char_pos": 756, "end_char_pos": 1201 } ]
[ 0, 233, 439, 515, 757, 941, 1203 ]
1102.5457
1
We develop a theory for the market impact of large trading orders, which we call metaorders because they are typically split into small pieces and executed incrementally. Market impact is empirically observed to be a concave function of metaorder size, i.e. the impact per share of large metaorders is smaller than that of small metaorders. Within a framework in which informed traders are competitive we derive a fair pricing condition, which says that the average transaction price of the metaorder is equal to the price after trading is completed. We show that at equilibrium the distribution of trading volume adjusts to reflect information, and dictates the shape of the impact function. The resulting theory makes empirically testable predictions for the functional form of both the temporary and permanent components of market impact. Based on a commonly observed asymptotic distribution for the volume of large trades, it says that market impact should increase asymptotically roughly as the square root of size, with average permanent impact relaxing to about two thirds of peak impact.
We develop a theory for the market impact of large trading orders, which we call metaorders because they are typically split into small pieces and executed incrementally. Market impact is empirically observed to be a concave function of metaorder size, i.e. the impact per share of large metaorders is smaller than that of small metaorders. We formulate a stylized model of an algorithmic execution service and derive a fair pricing condition, which says that the average transaction price of the metaorder is equal to the price after trading is completed. We show that at equilibrium the distribution of trading volume adjusts to reflect information, and dictates the shape of the impact function. The resulting theory makes empirically testable predictions for the functional form of both the temporary and permanent components of market impact. Based on the commonly observed asymptotic distribution for the volume of large trades, it says that market impact should increase asymptotically roughly as the square root of size, with average permanent impact relaxing to about two thirds of peak impact.
[ { "type": "R", "before": "Within a framework in which informed traders are competitive we", "after": "We formulate a stylized model of an algorithmic execution service and", "start_char_pos": 341, "end_char_pos": 404 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 851, "end_char_pos": 852 } ]
[ 0, 170, 340, 550, 692, 841 ]
1103.0463
1
The performance of many common Internet applications can benefit from out-of-order delivery, a feature all IETF transportssince TCPhave included. Yet latency-sensitive applications still frequently build on in-order TCP despite its performance drawbacks, for reasons such as network compatibility and TCP's cultural inertia. We introduce uTCP, an API extension that adds out-of-order delivery support without changing TCP 's wire protocol, by delivering received TCP segments to the application immediately on arrival along with sequence number metadata. To obtain robust out-of-order delivery across middleboxes that may re-segment TCP flows, the application employs a "record-marking" content encoding such as COBS, allowing the receiver to extract records from a byte stream with arbitrary holes. TLS can also serve as such an encoding, enabling applications to obtain out-of-order delivery in a stream indistinguishable in the network from conventional TLS over TCP . With uTCP, for example, voice/videoconferencing applications can obtain performance comparable to that of UDP-based operation, even when forced to tunnel over TCP-based HTTP or HTTPS connections for network compatibility reasons .
Internet applications increasingly employ TCP not as a stream abstraction, but as a substrate for application-level transports, a use that converts TCP's in-order semantics from a convenience blessing to a performance curse. As Internet evolution makes TCP's use as a substrate likely to grow, we offer Minion, an architecture for backward-compatible out-of-order delivery atop TCP and TLS. Small OS API extensions allow applications to manage TCP's send buffer and to receive TCP segments out-of-order . Atop these extensions, Minion builds application-level protocols offering true unordered datagram delivery, within streams preserving strict wire-compatibility with unsecured or TLS-secured TCP connections. Minion's protocols can run on unmodified TCP stacks, but benefit incrementally when either endpoint is upgraded, for a backward-compatible deployment path. Experiments suggest that Minion can noticeably improve the performance of applications such as conferencing, virtual private networking, and web browsing, while incurring minimal CPU or bandwidth costs .
[ { "type": "R", "before": "The performance of many common Internet applications can benefit from out-of-order delivery, a feature all IETF transportssince TCPhave included. Yet latency-sensitive applications still frequently build on", "after": "Internet applications increasingly employ TCP not as a stream abstraction, but as a substrate for application-level transports, a use that converts TCP's", "start_char_pos": 0, "end_char_pos": 206 }, { "type": "R", "before": "TCP despite its performance drawbacks, for reasons such as network compatibility and", "after": "semantics from a convenience blessing to a performance curse. As Internet evolution makes", "start_char_pos": 216, "end_char_pos": 300 }, { "type": "R", "before": "cultural inertia. We introduce uTCP, an API extension that adds", "after": "use as a substrate likely to grow, we offer Minion, an architecture for backward-compatible", "start_char_pos": 307, "end_char_pos": 370 }, { "type": "R", "before": "support without changing TCP 's wire protocol, by delivering received TCP segments to the application immediately on arrival along with sequence number metadata. To obtain robust", "after": "atop TCP and TLS. Small OS API extensions allow applications to manage TCP's send buffer and to receive TCP segments", "start_char_pos": 393, "end_char_pos": 571 }, { "type": "R", "before": "delivery across middleboxes that may re-segment TCP flows, the application employs a \"record-marking\" content encoding such as COBS, allowing the receiver to extract records from a byte stream with arbitrary holes. TLS can also serve as such an encoding, enabling applications to obtain out-of-order delivery in a stream indistinguishable in the network from conventional TLS over TCP . With uTCP, for example, voice/videoconferencing applications can obtain performance comparable to that of UDP-based operation, even when forced to tunnel over TCP-based HTTP or HTTPS connections for network compatibility reasons", "after": ". Atop these extensions, Minion builds application-level protocols offering true unordered datagram delivery, within streams preserving strict wire-compatibility with unsecured or TLS-secured TCP connections. Minion's protocols can run on unmodified TCP stacks, but benefit incrementally when either endpoint is upgraded, for a backward-compatible deployment path. Experiments suggest that Minion can noticeably improve the performance of applications such as conferencing, virtual private networking, and web browsing, while incurring minimal CPU or bandwidth costs", "start_char_pos": 585, "end_char_pos": 1200 } ]
[ 0, 145, 324, 554, 799 ]
1103.0717
1
The social role of any company is to get the maximum profitability with the less risk. Due to Basel III, banks should now raise their minimum capital levels on an individual basis, with the aim of lowering the probability for a large crash to occur. Such implementation assumes that with higher minimum capital levels it becomes more probable that the value of the assets drop bellow the minimum level and consequently expects the number of bank defaults to drop also. We present evidence that in such new financial reality large crashes are avoid only if one assumes that banks will accept quietly the drop of business levels, which is counter-nature. Our perspective steams from statistical physics and gives hints for improving bank system resilience. Stock markets exhibit critical behavior and scaling features, showing a power-law for the amplitude of financial crisis. By modeling a financial network where critical behavior naturally emerges it is possible to show that bank system resilience is not favored by raising the levels of capital. Due to the complex nature of the financial network, only the probability of bank default is affected and not the magnitude of a money market crisis. Further, assuming that banks will try to restore business levels, raising diversification and lowering their individual risk, the dimension of the entire financial network will increase, which has the natural consequence of raising the probability of large crisis .
We address the problem of banking system resilience by applying off-equilibrium statistical physics to a system of particles, representing the economic agents, modelled according to the theoretical foundation of the current banking regulation, the so called Merton-Vasicek model. Economic agents are attracted to each other to exchange `economic energy', forming a network of trades. When the capital level of one economic agent drops below a minimum, the economic agent becomes insolvent. The insolvency of one single economic agent affects the economic energy of all its neighbours which thus become susceptible to insolvency, being able to trigger a chain of insolvencies (avalanche). We show that the distribution of avalanche sizes follows a power-law whose exponent depends on the minimum capital level. Furthermore, we present evidence that under an increase in the minimum capital level, large crashes will be avoided only if one assumes that agents will accept a drop in business levels, while keeping their trading attitudes and policies unchanged. The alternative assumption, that agents will try to restore their business levels, may lead to the unexpected consequence that large crises occur with higher probability .
[ { "type": "R", "before": "The social role of any company is to get the maximum profitability with the less risk. Due to Basel III, banks should now raise their minimum capital levels on an individual basis, with the aim of lowering the probability for a large crash to occur. Such implementation assumes that with higher minimum capital levels it becomes more probable that the value of", "after": "We address the problem of banking system resilience by applying off-equilibrium statistical physics to a system of particles, representing the economic agents, modelled according to the theoretical foundation of the current banking regulation, the so called Merton-Vasicek model. Economic agents are attracted to each other to exchange `economic energy', forming a network of trades. When the capital level of one economic agent drops below a minimum,", "start_char_pos": 0, "end_char_pos": 360 }, { "type": "R", "before": "assets drop bellow the minimum level and consequently expects the number of bank defaults to drop also. We", "after": "economic agent becomes insolvent. The insolvency of one single economic agent affects the economic energy of all its neighbours which thus become susceptible to insolvency, being able to trigger a chain of insolvencies (avalanche). We show that the distribution of avalanche sizes follows a power-law whose exponent depends on the minimum capital level. Furthermore, we", "start_char_pos": 365, "end_char_pos": 471 }, { "type": "R", "before": "in such new financial reality large crashes are avoid", "after": "under an increase in the minimum capital level, large crashes will be avoided", "start_char_pos": 494, "end_char_pos": 547 }, { "type": "R", "before": "banks will accept quietly the drop of", "after": "agents will accept a drop in", "start_char_pos": 573, "end_char_pos": 610 }, { "type": "R", "before": "which is counter-nature. Our perspective steams from statistical physics and gives hints for improving bank system resilience. Stock markets exhibit critical behavior and scaling features, showing a power-law for the amplitude of financial crisis. By modeling a financial network where critical behavior naturally emerges it is possible to show that bank system resilience is not favored by raising the levels of capital. Due to the complex nature of the financial network, only the probability of bank default is affected and not the magnitude of a money market crisis. Further, assuming that banks", "after": "while keeping their trading attitudes and policies unchanged. The alternative assumption, that agents", "start_char_pos": 628, "end_char_pos": 1227 }, { "type": "A", "before": null, "after": "their", "start_char_pos": 1248, "end_char_pos": 1248 }, { "type": "R", "before": "raising diversification and lowering their individual risk, the dimension of the entire financial network will increase, which has the natural consequence of raising the probability of large crisis", "after": "may lead to the unexpected consequence that large crises occur with higher probability", "start_char_pos": 1266, "end_char_pos": 1463 } ]
[ 0, 86, 249, 468, 652, 754, 875, 1049, 1198 ]
1103.1165
1
We study the problem of super-replication for game options under proportional transaction costs. We consider a multidimensional model which is an extension of the usual Black-Scholes (BS) model, in the sense that the volatility is a progressively measurable function of the stock . For this case we show that the super-replication price is the cheapest cost of a trivial super-replication strategy. This result is an extension of previous papers (see [ 1 ] , [ 2 ] , [ 10 ] and [ 11 ]) in which only European options with Markovian structure were considered. In [ 4 ] and [ 5 ] the authors suggested a purely probabilistic approach which is based on the Skorohod embedding and does not require a Markovian structure, but is limited to the one dimensional case. In this paper we propose another purely probabilistic approach which is based on the unpublished manuscript [ 7 ] .
We study the problem of super-replication for game options under proportional transaction costs. We consider a multidimensional continuous time model, in which the discounted stock price process satisfies the conditional full support property. We show that the super-replication price is the cheapest cost of a trivial super-replication strategy. This result is an extension of previous papers (see [ 3 ] and [ 7 ]) which considered only European options . In these papers the authors showed that with the presence of proportional transaction costs the super--replication price of a European option is given in terms of the concave envelope of the payoff function. In the present work we prove that for game options the super-replication price is given by a game variant analog of the standard concave envelope term. The treatment of game options is more complicated and requires additional tools. We combine the theory of consistent price systems together with the theory of extended weak convergence which was developed in [ 1 ] . The second theory is essential in dealing with hedging which involves stopping times, like in the case of game options .
[ { "type": "R", "before": "model which is an extension of the usual Black-Scholes (BS) model, in the sense that the volatility is a progressively measurable function of the stock . For this case we", "after": "continuous time model, in which the discounted stock price process satisfies the conditional full support property. We", "start_char_pos": 128, "end_char_pos": 298 }, { "type": "R", "before": "1", "after": "3", "start_char_pos": 453, "end_char_pos": 454 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 457, "end_char_pos": 458 }, { "type": "D", "before": "2", "after": null, "start_char_pos": 459, "end_char_pos": 460 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 478, "end_char_pos": 479 }, { "type": "D", "before": "10", "after": null, "start_char_pos": 480, "end_char_pos": 482 }, { "type": "R", "before": "11", "after": "7", "start_char_pos": 489, "end_char_pos": 491 }, { "type": "R", "before": "in which", "after": "which considered", "start_char_pos": 495, "end_char_pos": 503 }, { "type": "R", "before": "with Markovian structure were considered. In", "after": ". In these papers the authors showed that with the presence of proportional transaction costs the super--replication price of a European option is given in terms of the concave envelope of the payoff function. In the present work we prove that for game options the super-replication price is given by a game variant analog of the standard concave envelope term. The treatment of game options is more complicated and requires additional tools. We combine the theory of consistent price systems together with the theory of extended weak convergence which was developed in", "start_char_pos": 526, "end_char_pos": 570 }, { "type": "R", "before": "4", "after": "1", "start_char_pos": 573, "end_char_pos": 574 }, { "type": "D", "before": "and", "after": null, "start_char_pos": 577, "end_char_pos": 580 }, { "type": "D", "before": "5", "after": null, "start_char_pos": 581, "end_char_pos": 582 }, { "type": "D", "before": "the authors suggested a purely probabilistic approach which is based on the Skorohod embedding and does not require a Markovian structure, but is limited to the one dimensional case. In this paper we propose another purely probabilistic approach which is based on the unpublished manuscript", "after": null, "start_char_pos": 601, "end_char_pos": 891 }, { "type": "R", "before": "7", "after": ". The second theory is essential in dealing with hedging which involves stopping times, like in the case of game options", "start_char_pos": 892, "end_char_pos": 893 } ]
[ 0, 96, 281, 398, 567, 783 ]
1103.1243
1
The international trade network (ITN) has received renewed multidisciplinary interest due to recent advances in network theory. However, it is still unclear whether a network approach conveys additional, nontrivial information with respect to traditional international-economics analyses that describe world trade only in terms of local (first-order) properties. In this and in a companion paper , we employ a recently-proposed randomization method to assess in detail the role that local properties have in shaping higher-order patterns of the ITN in all its possible representations (binary/weighted, directed/undirected, aggregated/disaggregated) and across several years. Here we show that, remarkably, all the properties of all binary projections of the network can be completely traced back to the degree sequence, which is therefore maximally informative. Our results imply that explaining the observed degree sequence of the ITN, which has not received particular attention in economic theory, should instead become one the main focuses of models of trade.
The international trade network (ITN) has received renewed multidisciplinary interest due to recent advances in network theory. However, it is still unclear whether a network approach conveys additional, nontrivial information with respect to traditional international-economics analyses that describe world trade only in terms of local (first-order) properties. In this and in a companion paper (see arXiv:1103.1249 [physics.soc-ph]) , we employ a recently-proposed randomization method to assess in detail the role that local properties have in shaping higher-order patterns of the ITN in all its possible representations (binary/weighted, directed/undirected, aggregated/disaggregated) and across several years. Here we show that, remarkably, all the properties of all binary projections of the network can be completely traced back to the degree sequence, which is therefore maximally informative. Our results imply that explaining the observed degree sequence of the ITN, which has not received particular attention in economic theory, should instead become one the main focuses of models of trade.
[ { "type": "A", "before": null, "after": "(see arXiv:1103.1249", "start_char_pos": 396, "end_char_pos": 396 }, { "type": "A", "before": null, "after": "physics.soc-ph", "start_char_pos": 397, "end_char_pos": 397 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 398, "end_char_pos": 398 } ]
[ 0, 127, 362, 678, 865 ]
1103.1243
2
The international trade network (ITN) has received renewed multidisciplinary interest due to recent advances in network theory. However, it is still unclear whether a network approach conveys additional, nontrivial information with respect to traditional international-economics analyses that describe world trade only in terms of local (first-order) properties. In this and in a companion paper (see arXiv:1103.1249 [physics.soc-ph] ) , we employ a recently-proposed randomization method to assess in detail the role that local properties have in shaping higher-order patterns of the ITN in all its possible representations (binary/weighted, directed/undirected, aggregated/disaggregated ) and across several years. Here we show that, remarkably, all the properties of all binary projections of the network can be completely traced back to the degree sequence, which is therefore maximally informative. Our results imply that explaining the observed degree sequence of the ITN, which has not received particular attention in economic theory, should instead become one the main focuses of models of trade.
The international trade network (ITN) has received renewed multidisciplinary interest due to recent advances in network theory. However, it is still unclear whether a network approach conveys additional, nontrivial information with respect to traditional international-economics analyses that describe world trade only in terms of local (first-order) properties. In this and in a companion paper , we employ a recently proposed randomization method to assess in detail the role that local properties have in shaping higher-order patterns of the ITN in all its possible representations (binary/weighted, directed/undirected, aggregated/disaggregated by commodity ) and across several years. Here we show that, remarkably, the properties of all binary projections of the network can be completely traced back to the degree sequence, which is therefore maximally informative. Our results imply that explaining the observed degree sequence of the ITN, which has not received particular attention in economic theory, should instead become one the main focuses of models of trade.
[ { "type": "D", "before": "(see arXiv:1103.1249", "after": null, "start_char_pos": 396, "end_char_pos": 416 }, { "type": "D", "before": "physics.soc-ph", "after": null, "start_char_pos": 417, "end_char_pos": 431 }, { "type": "D", "before": ")", "after": null, "start_char_pos": 449, "end_char_pos": 450 }, { "type": "R", "before": "recently-proposed", "after": "recently proposed", "start_char_pos": 465, "end_char_pos": 482 }, { "type": "A", "before": null, "after": "by commodity", "start_char_pos": 704, "end_char_pos": 704 }, { "type": "D", "before": "all", "after": null, "start_char_pos": 764, "end_char_pos": 767 } ]
[ 0, 127, 362, 732, 919 ]
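The two records above (arXiv:1103.1243) ask how much of the trade network's structure is already fixed by its degree sequence, by comparing the observed network against a randomization that preserves local properties. The sketch below is only the simplest numerical stand-in for that general idea, an assumption of this illustration rather than the randomization method used in the records: repeated double-edge swaps keep every node's degree fixed while scrambling higher-order structure, giving a degree-preserving null model to compare against.

```python
import random

def degree_preserving_randomization(edges, n_swaps, seed=0):
    """Randomize an undirected simple graph by double-edge swaps.

    Each accepted swap (a-b, c-d) -> (a-d, c-b) keeps every node degree fixed;
    swaps that would create self-loops or multi-edges are refused.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    swaps, tries = 0, 0
    while swaps < n_swaps and tries < 100 * n_swaps:   # guard against getting stuck
        tries += 1
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if rng.random() < 0.5:       # randomize which endpoints get re-paired
            c, d = d, c
        if len({a, b, c, d}) < 4:
            continue                 # would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue                 # would create a multi-edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        swaps += 1
    return edges

ring = [(k, (k + 1) % 8) for k in range(8)]   # 8-cycle: every node has degree 2
print(degree_preserving_randomization(ring, n_swaps=20))
```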
1103.1249
1
In this sequel to a companion paper , we complement our analysis of the binary projections of the International Trade Network (ITN) by considering its weighted representations. We show that, unlike the binary case, all possible weighted representations of the ITN (directed/undirected, aggregated/disaggregated) cannot be traced back to local structural properties, which are therefore of limited informativeness. Our results highlight that any topological property representing only partial information (e.g., degree sequences) cannot in general be obtained from the corresponding weighted property (e.g., strength sequences). Therefore the expectation that weighted structural properties offer a more complete description than purely topological ones is misleading. Our analysis of the ITN detects indirect effects that are not captured by traditional macroeconomic analyses focused only on weighted first-order country-specific properties, and highlights the limitations of models and theories that overemphasize the need to reproduce and explain such properties.
In this sequel to a companion paper (see arXiv:1103.1243 [physics.soc-ph]) , we complement our analysis of the binary projections of the International Trade Network (ITN) by considering its weighted representations. We show that, unlike the binary case, all possible weighted representations of the ITN (directed/undirected, aggregated/disaggregated) cannot be traced back to local structural properties, which are therefore of limited informativeness. Our results highlight that any topological property representing only partial information (e.g., degree sequences) cannot in general be obtained from the corresponding weighted property (e.g., strength sequences). Therefore the expectation that weighted structural properties offer a more complete description than purely topological ones is misleading. Our analysis of the ITN detects indirect effects that are not captured by traditional macroeconomic analyses focused only on weighted first-order country-specific properties, and highlights the limitations of models and theories that overemphasize the need to reproduce and explain such properties.
[ { "type": "A", "before": null, "after": "(see arXiv:1103.1243", "start_char_pos": 36, "end_char_pos": 36 }, { "type": "A", "before": null, "after": "physics.soc-ph", "start_char_pos": 37, "end_char_pos": 37 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 38, "end_char_pos": 38 } ]
[ 0, 179, 416, 630, 770 ]
1103.1249
2
In this sequel to a companion paper (see arXiv:1103.1243 [physics.soc-ph] ) , we complement our analysis of the binary projections of the International Trade Network (ITN ) by considering its weighted representations. We show that, unlike the binary case, all possible weighted representations of the ITN (directed/undirected, aggregated/disaggregated) cannot be traced back to local structural properties, which are therefore of limited informativeness. Our results highlight that any topological property representing only partial information (e.g., degree sequences) cannot in general be obtained from the corresponding weighted property (e.g., strength sequences). Therefore the expectation that weighted structural properties offer a more complete description than purely topological ones is misleading. Our analysis of the ITN detects indirect effects that are not captured by traditional macroeconomic analyses focused only on weighted first-order country-specific properties, and highlights the limitations of models and theories that overemphasize the need to reproduce and explain such properties .
Based on the misleading expectation that weighted network properties always offer a more complete description than purely topological ones, current economic models of the International Trade Network (ITN) generally aim at explaining local weighted properties, not local binary ones. Here we complement our analysis of the binary projections of the ITN by considering its weighted representations. We show that, unlike the binary case, all possible weighted representations of the ITN (directed/undirected, aggregated/disaggregated) cannot be traced back to local country-specific properties, which are therefore of limited informativeness. Our two papers show that traditional macroeconomic approaches systematically fail to capture the key properties of the ITN . In the binary case, they do not focus on the degree sequence and hence cannot characterize or replicate higher-order properties. In the weighted case, they generally focus on the strength sequence, but the knowledge of the latter is not enough in order to understand or reproduce indirect effects .
[ { "type": "D", "before": "In this sequel to a companion paper (see arXiv:1103.1243", "after": null, "start_char_pos": 0, "end_char_pos": 56 }, { "type": "D", "before": "physics.soc-ph", "after": null, "start_char_pos": 57, "end_char_pos": 71 }, { "type": "R", "before": ") ,", "after": "Based on the misleading expectation that weighted network properties always offer a more complete description than purely topological ones, current economic models of the International Trade Network (ITN) generally aim at explaining local weighted properties, not local binary ones. Here", "start_char_pos": 89, "end_char_pos": 92 }, { "type": "R", "before": "International Trade Network (ITN )", "after": "ITN", "start_char_pos": 153, "end_char_pos": 187 }, { "type": "R", "before": "structural", "after": "country-specific", "start_char_pos": 399, "end_char_pos": 409 }, { "type": "R", "before": "results highlight that any topological property representing only partial information (e.g., degree sequences) cannot in general be obtained from the corresponding weighted property (e.g., strength sequences). Therefore the expectation that weighted structural properties offer a more complete description than purely topological ones is misleading. Our analysis", "after": "two papers show that traditional macroeconomic approaches systematically fail to capture the key properties", "start_char_pos": 474, "end_char_pos": 836 }, { "type": "R", "before": "detects indirect effects that are not captured by traditional macroeconomic analyses focused only on weighted first-order country-specific properties, and highlights the limitations of models and theories that overemphasize the need to reproduce and explain such properties", "after": ". In the binary case, they do not focus on the degree sequence and hence cannot characterize or replicate higher-order properties. In the weighted case, they generally focus on the strength sequence, but the knowledge of the latter is not enough in order to understand or reproduce indirect effects", "start_char_pos": 848, "end_char_pos": 1121 } ]
[ 0, 232, 469, 683, 823 ]
1103.1402
1
Robust advances in interactome analysis demand comprehensive, non-redundant , consistently annotated datasets. To enable efficient retrieval, annotation and exchange of protein-protein interaction data, proteomics community has been developing infrastructures under Proteomics Standards Initiative and IMEx consortium. However, there is presently no resource that would provide all features necessary to construct reference interaction networks in organisms. The recently developed iRefIndex database includes interactions from most popular repositories with a standardized protein nomenclature . We have developed ppiTrim, a script that can process iRefIndex to produce a consolidated dataset of physical interactions for, in principle, every organism. For the current publication we processed only the three largest datasets: yeast, human and fruitfly. ppiTrim maps interactants to NCBI Gene IDs, deflates possibly spoke-expanded complexes and reconciles annotation labels between different source databases. Our results indicate that ppiTrim is able to significantly reduce the complexity of interaction datasets. URL: ftp://ftp.ncbi.nlm.nih.gov/pub/qmbpmn/ppiTrim/
Robust advances in interactome analysis demand comprehensive, non-redundant and consistently annotated datasets. By non-redundant, we mean that the accounting of evidence for every interaction should be faithful: each independent experimental support is counted exactly once, no more, no less. While many interactions are shared among public repositories, none of them contains the complete known interactome for any organism. In addition, the annotations of the same experimental result by different repositories often disagree. This brings up the issue of which annotation to keep while consolidating evidences that are the same. The iRefIndex database, including interactions from most popular repositories with a standardized protein nomenclature , represents a significant advance in all aspects, especially in comprehensiveness. However, iRefIndex aims to maintain all information/annotation from original sources and requires users to perform additional processing to fully achieve the aforementioned goals. To address issues with iRefIndex and to achieve our goals, we have developed ppiTrim, a script that processes iRefIndex to produce non-redundant, consistently annotated datasets of physical interactions . Our script proceeds in three stages: mapping all interactants to gene identifiers and removing all undesired raw interactions, deflating potentially expanded complexes, and reconciling for each interaction the annotation labels among different source databases. As an illustration, we have processed the three organismal datasets: yeast, human and fruitfly. While ppiTrim can resolve most apparent conflicts between different labelings, we also discovered some unresolvable disagreements mostly resulting from different annotation policies among repositories. URL: URL
[ { "type": "R", "before": ",", "after": "and", "start_char_pos": 76, "end_char_pos": 77 }, { "type": "R", "before": "To enable efficient retrieval, annotation and exchange of protein-protein interaction data, proteomics community has been developing infrastructures under Proteomics Standards Initiative and IMEx consortium. However, there is presently no resource that would provide all features necessary to construct reference interaction networks in URLanisms. The recently developed iRefIndex databaseincludes", "after": "By non-redundant, we mean that the accounting of evidence for every interaction should be faithful: each independent experimental support is counted exactly once, no more, no less. While many interactions are shared among public repositories, none of them contains the complete known interactome for any URLanism. In addition, the annotations of the same experimental result by different repositories often disagree. This brings up the issue of which annotation to keep while consolidating evidences that are the same. The iRefIndex database, including", "start_char_pos": 111, "end_char_pos": 508 }, { "type": "R", "before": ". We", "after": ", represents a significant advance in all aspects, especially in comprehensiveness. However, iRefIndex aims to maintain all information/annotation from original sources and requires users to perform additional processing to fully achieve the aforementioned goals. To address issues with iRefIndex and to achieve our goals, we", "start_char_pos": 594, "end_char_pos": 598 }, { "type": "R", "before": "can process", "after": "processes", "start_char_pos": 637, "end_char_pos": 648 }, { "type": "R", "before": "a consolidated dataset", "after": "non-redundant, consistently annotated datasets", "start_char_pos": 670, "end_char_pos": 692 }, { "type": "R", "before": "for, in principle, every URLanism. For the current publication we processed only the three largest", "after": ". Our script proceeds in three stages: mapping all interactants to gene identifiers and removing all undesired raw interactions, deflating potentially expanded complexes, and reconciling for each interaction the annotation labels among different source databases. As an illustration, we have processed the three URLanismal", "start_char_pos": 718, "end_char_pos": 816 }, { "type": "R", "before": "ppiTrim maps interactants to NCBI Gene IDs, deflates possibly spoke-expanded complexes and reconciles annotation labels between different source databases. Our results indicate that ppiTrim is able to significantly reduce the complexity of interaction datasets. URL: ftp://ftp.ncbi.nlm.nih.gov/pub/qmbpmn/ppiTrim/", "after": "While ppiTrim can resolve most apparent conflicts between different labelings, we also discovered some unresolvable disagreements mostly resulting from different annotation policies among repositories. URL: URL", "start_char_pos": 854, "end_char_pos": 1167 } ]
[ 0, 110, 318, 458, 595, 752, 1009, 1115 ]
1103.1652
1
We formulate a model of utility for a continuous time framework that captures the decision-maker's concern with ambiguity or model uncertainty. The main novelty is in the range of model uncertainty that is accommodated. The probability measures entertained by the decision-maker are not assumed to be equivalent to a fixed reference measure and thus the model permits ambiguity about which scenarios are possible. Modeling ambiguity about volatility is a prime motivation and a major focus. Implications for asset returns are derived in representative agent frameworks, in both Arrow-Debreu style economies and in sequential Radner-style economies .
We formulate a model of utility for a continuous time framework that captures the decision-maker's concern with ambiguity or model uncertainty. The main novelty is in the range of model uncertainty that is accommodated. The probability measures entertained by the decision-maker are not assumed to be equivalent to a fixed reference measure and thus the model permits ambiguity about which scenarios are possible. Modeling ambiguity about volatility is a prime motivation and a major focus. A motivating application is the extension of asset pricing theory to incorporate ambiguity about volatility and possibility. The paper provides some initial steps in this direction by deriving equilibrium relations for asset returns in a representative agent framework, by deriving hedging bounds for asset prices, and by showing the pivotal role of `state prices' in both the equilibrium and no-arbitrage analyses .
[ { "type": "R", "before": "Implications", "after": "A motivating application is the extension of asset pricing theory to incorporate ambiguity about volatility and possibility. The paper provides some initial steps in this direction by deriving equilibrium relations", "start_char_pos": 491, "end_char_pos": 503 }, { "type": "D", "before": "are derived in representative agent frameworks,", "after": null, "start_char_pos": 522, "end_char_pos": 569 }, { "type": "R", "before": "both Arrow-Debreu style economies and in sequential Radner-style economies", "after": "a representative agent framework, by deriving hedging bounds for asset prices, and by showing the pivotal role of `state prices' in both the equilibrium and no-arbitrage analyses", "start_char_pos": 573, "end_char_pos": 647 } ]
[ 0, 143, 219, 413, 490 ]
1103.1652
2
We formulate a model of utility for a continuous time framework that captures the decision-maker's concern with ambiguity or model uncertainty. The main novelty is in the range of model uncertainty that is accommodated. The probability measures entertained by the decision-maker are not assumed to be equivalent to a fixed reference measure and thus the model permits ambiguity about which scenarios are possible. Modeling ambiguity about volatility is a prime motivation and a major focus. A motivating application is the extension of asset pricing theory to incorporate ambiguity about volatility and possibility. The paper provides some initial steps in this direction by deriving equilibrium relations for asset returns in a representative agent framework, by deriving hedging bounds for asset prices, and by showing the pivotal role of `state prices' in both the equilibrium and no-arbitrage analyses .
This paper formulates a model of utility for a continuous time framework that captures the decision-maker's concern with ambiguity about both the drift and volatility of the driving process. At a technical level, the analysis requires a significant departure from existing continuous time modeling because it cannot be done within a probability space framework. This is because ambiguity about volatility leads invariably to a set of nonequivalent priors, that is, to priors that disagree about which scenarios are possible .
[ { "type": "R", "before": "We formulate", "after": "This paper formulates", "start_char_pos": 0, "end_char_pos": 12 }, { "type": "R", "before": "or model uncertainty. The main novelty is in the range of model uncertainty that is accommodated. The probability measures entertained by the decision-maker are not assumed to be equivalent to a fixed reference measure and thus the model permits ambiguity about which scenarios are possible. Modeling ambiguity about volatility is a prime motivation and a major focus. A motivating application is the extension of asset pricing theory to incorporate", "after": "about both the drift and volatility of the driving process. At a technical level, the analysis requires a significant departure from existing continuous time modeling because it cannot be done within a probability space framework. This is because", "start_char_pos": 122, "end_char_pos": 571 }, { "type": "R", "before": "and possibility. The paper provides some initial steps in this direction by deriving equilibrium relations for asset returns in a representative agent framework, by deriving hedging bounds for asset prices, and by showing the pivotal role of `state prices' in both the equilibrium and no-arbitrage analyses", "after": "leads invariably to a set of nonequivalent priors, that is, to priors that disagree about which scenarios are possible", "start_char_pos": 599, "end_char_pos": 905 } ]
[ 0, 143, 219, 413, 490, 615 ]
1103.1755
1
We formulate an optimal stopping problem where the probability scale is distorted by a general nonlinear function. The problem is inherently time inconsistent due to the Choquet integration involved. We develop a new approach, based on a reformulation of the problem where one optimally chooses the probability distribution or quantile function of the stopped state. An optimal stopping time can then be recovered from the obtained distribution/quantile function via the Skorokhod embedding. This approach enables us to solve the problem in a fairly general manner with different shapes of the payoff and probability distortion functions. In particular, we show that the optimality of the exit time of an interval (corresponding to the "cut-loss-or-stop-gain" strategy widely adopted in stock trading ) is endogenous for problems with convex distortion functions, including ones where distortion is absent. We also discuss economical interpretations of the results .
We formulate an optimal stopping problem for a geometric Brownian motion where the probability scale is distorted by a general nonlinear function. The problem is inherently time inconsistent due to the Choquet integration involved. We develop a new approach, based on a reformulation of the problem where one optimally chooses the probability distribution or quantile function of the stopped state. An optimal stopping time can then be recovered from the obtained distribution/quantile function , either in a straightforward way for several important cases or in general via the Skorokhod embedding. This approach enables us to solve the problem in a fairly general manner with different shapes of the payoff and probability distortion functions. We also discuss economical interpretations of the results. In particular, we justify several liquidation strategies widely adopted in stock trading , including those of "buy and hold", "cut loss or take profit", "cut loss and let profit run" and "sell on a percentage of historical high" .
[ { "type": "A", "before": null, "after": "for a geometric Brownian motion", "start_char_pos": 41, "end_char_pos": 41 }, { "type": "A", "before": null, "after": ", either in a straightforward way for several important cases or in general", "start_char_pos": 464, "end_char_pos": 464 }, { "type": "A", "before": null, "after": "We also discuss economical interpretations of the results.", "start_char_pos": 641, "end_char_pos": 641 }, { "type": "R", "before": "show that the optimality of the exit time of an interval (corresponding to the \"cut-loss-or-stop-gain\" strategy", "after": "justify several liquidation strategies", "start_char_pos": 660, "end_char_pos": 771 }, { "type": "R", "before": ") is endogenous for problems with convex distortion functions, including ones where distortion is absent. We also discuss economical interpretations of the results", "after": ", including those of \"buy and hold\", \"cut loss or take profit\", \"cut loss and let profit run\" and \"sell on a percentage of historical high\"", "start_char_pos": 804, "end_char_pos": 967 } ]
[ 0, 115, 200, 367, 493, 640, 909 ]
1103.2273
1
Materials in biology span all the scales from Angstroms to meters and typically consist of complex hierarchical assemblies of simple building blocks. Here we review an application of category theory to describe structural and resulting functional properties of biological protein materials by developing so-called ologs. An olog is like a "concept web" or "semantic network" except that it follows a rigorous mathematical formulation based on category theory. This key difference ensures that an olog is unambiguous, highly adaptable to evolution and change, and suitable for sharing concepts with other ologs . We consider a simple example of an alpha-helical and an amyloid-like protein filament subjected to axial extension and develop an olog representation of their structural and resulting mechanical properties. We also construct a representation of a social network in which people send text-messages to their nearest neighbors and act as a team to perform a task. We show that the olog for the protein and the olog for the social network feature identical category-theoretic representations, and we proceed to precisely explicate the analogy or isomorphism between them. The examples reviewed here demonstrate that the intrinsic nature of a complex system, which in particular includes a precise relationship between structure and function at different hierarchical levels, can be effectively represented by an olog. This, in turn, allows for comparative studies between disparate materials or fields of application, and results in novel approaches to derive functionality in the design of de novo hierarchical systems. We discuss opportunities and challenges associated with the description of complex biological materials by using ologs as a powerful tool for analysis and design in the context of materiomics, and we present the potential impact of this approach for engineering, life sciences, and medicine.
Materials in biology span all the scales from Angstroms to meters and typically consist of complex hierarchical assemblies of simple building blocks. Here we describe an application of category theory to describe structural and resulting functional properties of biological protein materials by developing so-called ologs. An olog is like a "concept web" or "semantic network" except that it follows a rigorous mathematical formulation based on category theory. This key difference ensures that an olog is unambiguous, highly adaptable to evolution and change, and suitable for sharing concepts with other olog . We consider simple cases of alpha-helical and amyloid-like protein filaments subjected to axial extension and develop an olog representation of their structural and resulting mechanical properties. We also construct a representation of a social network in which people send text-messages to their nearest neighbors and act as a team to perform a task. We show that the olog for the protein and the olog for the social network feature identical category-theoretic representations, and we proceed to precisely explicate the analogy or isomorphism between them. The examples presented here demonstrate that the intrinsic nature of a complex system, which in particular includes a precise relationship between structure and function at different hierarchical levels, can be effectively represented by an olog. This, in turn, allows for comparative studies between disparate materials or fields of application, and results in novel approaches to derive functionality in the design of de novo hierarchical systems. We discuss opportunities and challenges associated with the description of complex biological materials by using ologs as a powerful tool for analysis and design in the context of materiomics, and we present the potential impact of this approach for engineering, life sciences, and medicine.
[ { "type": "R", "before": "review", "after": "describe", "start_char_pos": 158, "end_char_pos": 164 }, { "type": "R", "before": "ologs", "after": "olog", "start_char_pos": 604, "end_char_pos": 609 }, { "type": "R", "before": "a simple example of an", "after": "simple cases of", "start_char_pos": 624, "end_char_pos": 646 }, { "type": "D", "before": "an", "after": null, "start_char_pos": 665, "end_char_pos": 667 }, { "type": "R", "before": "filament", "after": "filaments", "start_char_pos": 689, "end_char_pos": 697 }, { "type": "R", "before": "reviewed", "after": "presented", "start_char_pos": 1193, "end_char_pos": 1201 } ]
[ 0, 149, 320, 459, 611, 818, 972, 1179, 1425, 1628 ]
1103.2310
1
We consider a square-integrable semimartingale with conditionally independent increments and symmetric jump measure, and show that its discrete realized variance dominates its quadratic variation in increasing convex order. The result has immediate applications to the pricing of options on realized variance. For a class of models including time-changed Levy models and Sato processes with symmetric jumps our results show that options on variance are typically underpriced, if quadratic variation is substituted for the discretely sampled realized variance.
We consider a square-integrable semimartingale and investigate the convex order relations between its discrete, continuous and predictable quadratic variation. As the main results, we show that if the semimartingale has conditionally independent increments and symmetric jump measure, then its discrete realized variance dominates its quadratic variation in increasing convex order. The results have immediate applications to the pricing of options on realized variance. For a class of models including time-changed Levy models and Sato processes with symmetric jumps our results show that options on variance are typically underpriced, if quadratic variation is substituted for the discretely sampled realized variance.
[ { "type": "R", "before": "with", "after": "and investigate the convex order relations between its discrete, continuous and predictable quadratic variation. As the main results, we show that if the semimartingale has", "start_char_pos": 47, "end_char_pos": 51 }, { "type": "R", "before": "and show that", "after": "then", "start_char_pos": 117, "end_char_pos": 130 }, { "type": "R", "before": "result has", "after": "results have", "start_char_pos": 228, "end_char_pos": 238 } ]
[ 0, 223, 309 ]
1103.3294
1
We report calculations of position- and size-dependent opening probabilities for bubbles in double-stranded DNA . Our results are obtained from transfer-matrix solutions of (i) the Zimm-Bragg model for unconstrained DNA and of (ii) a self-consistent linearization of the Benham model for DNA under a superhelical constraint . The numerical efficiency of our method allows for the analysis of entire genomes and of random sequences of corresponding length (10^6-10^9 bp). At physiological temperatures and superhelical densities, opening is strongly cooperative with average bubble sizes of {\cal O}(100-1000) bp. In general, their statistics and location are dominated by sequence heterogeneity. For genomic DNA , bubbles are frequently located directly upstream of transcription start sites.
We present a general framework to study the thermodynamic denaturation of double-stranded DNA under superhelical stress. We report calculations of position- and size-dependent opening probabilities for bubbles along the sequence . Our results are obtained from transfer-matrix solutions of the Zimm-Bragg model for unconstrained DNA and of a self-consistent linearization of the Benham model for superhelical DNA . The numerical efficiency of our method allows for the analysis of entire genomes and of random sequences of corresponding length (10^6-10^9 base pairs). We show that, at physiological conditions, opening in superhelical DNA is strongly cooperative with average bubble sizes of 10^2-10^3 base pairs (bp), and orders of magnitude higher than in unconstrained DNA. In heterogeneous sequences, the average degree of base-pair opening is self-averaging, while bubble localization and statistics are dominated by sequence disorder. Compared to random sequences with identical GC-content, genomic DNA has a significantly increased probability to open large bubbles under superhelical stress. These bubbles are frequently located directly upstream of transcription start sites.
[ { "type": "A", "before": null, "after": "present a general framework to study the thermodynamic denaturation of double-stranded DNA under superhelical stress. We", "start_char_pos": 3, "end_char_pos": 3 }, { "type": "R", "before": "in double-stranded DNA", "after": "along the sequence", "start_char_pos": 90, "end_char_pos": 112 }, { "type": "D", "before": "(i)", "after": null, "start_char_pos": 174, "end_char_pos": 177 }, { "type": "D", "before": "(ii)", "after": null, "start_char_pos": 228, "end_char_pos": 232 }, { "type": "R", "before": "DNA under a superhelical constraint", "after": "superhelical DNA", "start_char_pos": 289, "end_char_pos": 324 }, { "type": "R", "before": "bp). At physiological temperatures and superhelical densities, opening", "after": "base pairs). We show that, at physiological conditions, opening in superhelical DNA", "start_char_pos": 467, "end_char_pos": 537 }, { "type": "D", "before": "O", "after": null, "start_char_pos": 614, "end_char_pos": 615 }, { "type": "R", "before": "(100-1000) bp. In general, their statistics and location", "after": "10^2-10^3 base pairs (bp), and orders of magnitude higher than in unconstrained DNA. In heterogeneous sequences, the average degree of base-pair opening is self-averaging, while bubble localization and statistics", "start_char_pos": 616, "end_char_pos": 672 }, { "type": "R", "before": "heterogeneity. For genomic DNA ,", "after": "disorder. Compared to random sequences with identical GC-content, genomic DNA has a significantly increased probability to open large bubbles under superhelical stress. These", "start_char_pos": 699, "end_char_pos": 731 } ]
[ 0, 326, 471, 630, 713 ]
1103.3639
1
We perform wavelet decomposition of high frequency financial time series into high and low-energy spectral sectors . Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out , very unsuspectedly, that the high-energy component and a fraction of the low-energy contribution, defined by most (\simeq 98 \%) of the wavelet coefficients , can be neglected for the purpose of option premium evaluation with expiration times in the range of a few days to one month. The relevant low-energy component, which has attenuated volatility (reduction by a factor \simeq 1/10), is (i) normally distributed, (ii) long-range correlated for intraday prices and volatility, and (iii) time-reversal asymmetric. Our results indicate that the usual non-gaussian profiles of log-return distributions contain much more information than needed for option pricing, which is essentially dependent on hidden self-correlation properties of the underlying asset fluctuations .
We perform wavelet decomposition of high frequency financial time series into large and small time scale components . Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (\simeq 99.6 \%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation . The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day .
[ { "type": "R", "before": "high and low-energy spectral sectors", "after": "large and small time scale components", "start_char_pos": 78, "end_char_pos": 114 }, { "type": "R", "before": ", very unsuspectedly, that the high-energy component and a fraction of the low-energy contribution, defined", "after": "that the small scale component defined", "start_char_pos": 205, "end_char_pos": 312 }, { "type": "R", "before": "98", "after": "99.6", "start_char_pos": 329, "end_char_pos": 331 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 364, "end_char_pos": 365 }, { "type": "R", "before": "with expiration times in the range of a few days to one month. The relevant low-energy component, which has attenuated volatility (reduction by a factor \\simeq 1/10), is (i) normally distributed, (ii) long-range correlated for intraday prices and volatility, and (iii) time-reversal asymmetric. Our results indicate that the usual", "after": ". The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the", "start_char_pos": 428, "end_char_pos": 758 }, { "type": "R", "before": "profiles of log-return distributions contain much more information than needed for option pricing, which is essentially dependent on hidden self-correlation properties of the underlying asset fluctuations", "after": "statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day", "start_char_pos": 772, "end_char_pos": 976 } ]
[ 0, 116, 490, 722 ]
1103.4490
1
Regulatory dynamics has mathematical descriptions in terms of rate equations for continuous variables and, after discretization of state space and time , as Boolean maps. Here we study the effects of discretization for the notion of stability. In particular, we define the stability of a Boolean state sequence in consistency with the stability of the original continuous trajectory that has been discretized. For a class of randomly connected systems with randomly drawn Boolean functions, so-called Kauffman networks, we find that the dynamics is stable for almost all choices of parameter values. The so-called chaotic regime in Kauffman networks appears only as a damage spreading effect after flip perturbations . We conclude that regulatory systems amenable to state discretization do not exhibit chaotic behaviour. Both resilience to noise and sensitivity to input signals come "for free" as typical properties of sufficiently dense regulatory networks.
Regulatory dynamics in systems biology is often described by continuous rate equations for continuously varying chemical concentrations. Binary discretization of state space and time leads to Boolean dynamics. In the latter, the distinction between stable and unstable dynamics is usually made by checking damage after flip perturbations. Here we point out that this notion of stability is incompatible with the stability properties of the original continuous dynamics probed by small perturbations. In particular, random networks of nodes with large sensitivity yield stable dynamics, in contrast to the prediction of chaos obtained by flip perturbations in Boolean networks.
[ { "type": "R", "before": "has mathematical descriptions in terms of", "after": "in systems biology is often described by continuous", "start_char_pos": 20, "end_char_pos": 61 }, { "type": "R", "before": "continuous variables and, after", "after": "continuously varying chemical concentrations. Binary", "start_char_pos": 81, "end_char_pos": 112 }, { "type": "R", "before": ", as Boolean maps. Here we study the effects of discretization for the notion of stability. In particular, we define the stability of a Boolean state sequence in consistency", "after": "leads to Boolean dynamics. In the latter, the distinction between stable and unstable dynamics is usually made by checking damage after flip perturbations. Here we point out that this notion of stability is incompatible", "start_char_pos": 152, "end_char_pos": 325 }, { "type": "A", "before": null, "after": "properties", "start_char_pos": 345, "end_char_pos": 345 }, { "type": "R", "before": "trajectory that has been discretized. For a class of randomly connected systems with randomly drawn Boolean functions, so-called Kauffman networks, we find that the dynamics is stable for almost all choices of parameter values. The so-called chaotic regime in Kauffman networks appears only as a damage spreading effect after flip perturbations . We conclude that regulatory systems amenable to state discretization do not exhibit chaotic behaviour. Both resilience to noise and sensitivity to input signals come \"for free\" as typical properties of sufficiently dense regulatory", "after": "dynamics probed by small perturbations. In particular, random networks of nodes with large sensitivity yield stable dynamics, in contrast to the prediction of chaos obtained by flip perturbations in Boolean", "start_char_pos": 373, "end_char_pos": 951 } ]
[ 0, 170, 243, 410, 600, 719, 822 ]
1103.4490
2
Regulatory dynamics in systems biology is often described by continuous rate equations for continuously varying chemical concentrations. Binary discretization of state space and time leads to Boolean dynamics. In the latter, the distinction between stable and unstable dynamics is usually made by checking damage after flip perturbations . Here we point out that this notion of stability is incompatible with the stability properties of the original continuous dynamics probed by small perturbations . In particular, random networks of nodes with large sensitivity yield stable dynamics , in contrast to the prediction of chaos obtained by flip perturbationsin Boolean networks .
Regulatory dynamics in biology is often described by continuous rate equations for continuously varying chemical concentrations. Binary discretization of state space and time leads to Boolean dynamics. In the latter, the dynamics has been called unstable if flip perturbations lead to damage spreading . Here we find that this stability classification strongly differs from the stability properties of the original continuous dynamics under small perturbations of the state vector . In particular, random networks of nodes with large sensitivity yield stable dynamics under small perturbations .
[ { "type": "D", "before": "systems", "after": null, "start_char_pos": 23, "end_char_pos": 30 }, { "type": "R", "before": "distinction between stable and unstable dynamics is usually made by checking damage after flip perturbations", "after": "dynamics has been called unstable if flip perturbations lead to damage spreading", "start_char_pos": 229, "end_char_pos": 337 }, { "type": "R", "before": "point out that this notion of stability is incompatible with", "after": "find that this stability classification strongly differs from", "start_char_pos": 348, "end_char_pos": 408 }, { "type": "R", "before": "probed by small perturbations", "after": "under small perturbations of the state vector", "start_char_pos": 470, "end_char_pos": 499 }, { "type": "R", "before": ", in contrast to the prediction of chaos obtained by flip perturbationsin Boolean networks", "after": "under small perturbations", "start_char_pos": 587, "end_char_pos": 677 } ]
[ 0, 136, 209, 339, 501 ]
1103.5458
1
Single-nucleotide-resolution chemical mapping is a classic approach to characterizing structured RNA molecules that is being advanced by new chemistries, faster readouts, and coupling to computational algorithms. Recent tests have suggested that 2'-OH acylation data (SHAPE) can give near-zero error rates (0-4\%) in modeling RNA secondary structure. Here, we benchmark the method on six RNAs for which crystallographic data are available: tRNAphe and 5S rRNA from E. coli; the P4-P6 domain of the Tetrahymena group I ribozyme; and ligand-bound domains from riboswitches for adenine, cyclic di-GMP, and glycine. SHAPE-directed modeling of these RNAs gave significant errors ( false negative rate of 24 \%; false discovery rate of 24\%) and in two cases, worse models than secondary structure predictions without data . Variations of data processing or modeling do not mitigate these errors. Instead, as evaluated by bootstrapping, the information content of SHAPE data appears insufficient to define these RNAs' structures. Thus, SHAPE-directed RNA modeling is not always accurate, and helix-by-helix confidence estimates, as described herein, may be critical for interpreting results from this powerful methodology.
Single-nucleotide-resolution chemical mapping for structured RNA is being rapidly advanced by new chemistries, faster readouts, and coupling to computational algorithms. Recent tests have suggested that 2'-OH acylation data (SHAPE) can give near-zero error rates (0-4\%) in modeling RNA secondary structure. Here, we benchmark the method on six RNAs for which crystallographic data are available: tRNAphe and 5S rRNA from E. coli; the P4-P6 domain of the Tetrahymena group I ribozyme; and ligand-bound domains from riboswitches for adenine, cyclic di-GMP, and glycine. SHAPE-directed modeling of these RNAs gave significant errors ( overall false negative rate of 17 \%; false discovery rate of 21\%), with at least one error in five of the six cases . Variations of data processing or modeling do not mitigate these errors. Instead, as evaluated by bootstrapping, the information content of SHAPE data appears insufficient to define these RNAs' structures. Thus, SHAPE-directed RNA modeling is not always accurate, and helix-by-helix confidence estimates, as described herein, may be critical for interpreting results from this powerful methodology.
[ { "type": "R", "before": "is a classic approach to characterizing structured RNA molecules that is being", "after": "for structured RNA is being rapidly", "start_char_pos": 46, "end_char_pos": 124 }, { "type": "A", "before": null, "after": "overall", "start_char_pos": 676, "end_char_pos": 676 }, { "type": "R", "before": "24", "after": "17", "start_char_pos": 700, "end_char_pos": 702 }, { "type": "R", "before": "24\\%)and in two cases, worse models than secondary structure predictions without data", "after": "21\\%), with at least one error in five of the six cases", "start_char_pos": 731, "end_char_pos": 816 } ]
[ 0, 212, 350, 473, 527, 611, 706, 890, 1023 ]
1103.5796
1
Unlike their model membrane counterparts, biological membranes are richly decorated with a heterogeneous assembly of membrane proteins. These proteins are so tightly packed that their excluded area interactions can alter the free energy landscape associated with their conformational degrees of freedom . As a specific case study in these effects , we consider the impact of crowding on the gating tension for mechanosensitive channels. We show that crowding can alter the gating energies by more than \approx 2 k_BT , a substantial fraction of the gating energies themselves in some cases .
Unlike their model membrane counterparts, biological membranes are richly decorated with a heterogeneous assembly of membrane proteins. These proteins are so tightly packed that their excluded area interactions can alter the free energy landscape controlling the conformational transitions suffered by such proteins. For membrane channels, this effect can alter the critical membrane tension at which they undergo a transition from a closed to an open state, and therefore influence protein function in vivo. Despite their obvious importance, crowding phenomena in membranes are much less well studied than in the cytoplasm. Using statistical mechanics results for hard disk liquids, we show that crowding induces an entropic tension in the membrane, which influences transitions that alter the projected area and circumference of a membrane protein . As a specific case study in this effect , we consider the impact of crowding on the gating properties of bacterial mechanosensitive membrane channels, which are thought to confer osmoprotection when these cells are subjected to osmotic shock. We find that crowding can alter the gating energies by more than 2 \; k_BT in physiological conditions , a substantial fraction of the total gating energies in some cases . Given the ubiquity of membrane crowding, the nonspecific nature of excluded volume interactions, and the fact that the function of many membrane proteins involve significant conformational changes, this specific case study highlights a general aspect in the function of membrane proteins .
[ { "type": "R", "before": "associated with their conformational degrees of freedom", "after": "controlling the conformational transitions suffered by such proteins. For membrane channels, this effect can alter the critical membrane tension at which they undergo a transition from a closed to an open state, and therefore influence protein function", "start_char_pos": 247, "end_char_pos": 302 }, { "type": "A", "before": null, "after": "in vivo", "start_char_pos": 302, "end_char_pos": 302 }, { "type": "A", "before": null, "after": ". Despite their obvious importance, crowding phenomena in membranes are much less well studied than in the cytoplasm. Using statistical mechanics results for hard disk liquids, we show that crowding induces an entropic tension in the membrane, which influences transitions that alter the projected area and circumference of a membrane protein", "start_char_pos": 302, "end_char_pos": 302 }, { "type": "R", "before": "these effects", "after": "this effect", "start_char_pos": 333, "end_char_pos": 346 }, { "type": "R", "before": "tension for mechanosensitive channels. We show", "after": "properties of bacterial mechanosensitive membrane channels, which are thought to confer osmoprotection when these cells are subjected to osmotic shock. We find", "start_char_pos": 398, "end_char_pos": 444 }, { "type": "D", "before": "\\approx", "after": null, "start_char_pos": 502, "end_char_pos": 509 }, { "type": "A", "before": null, "after": "\\;", "start_char_pos": 512, "end_char_pos": 512 }, { "type": "A", "before": null, "after": "in physiological conditions", "start_char_pos": 518, "end_char_pos": 518 }, { "type": "R", "before": "gating energies themselves", "after": "total gating energies", "start_char_pos": 551, "end_char_pos": 577 }, { "type": "A", "before": null, "after": ". Given the ubiquity of membrane crowding, the nonspecific nature of excluded volume interactions, and the fact that the function of many membrane proteins involve significant conformational changes, this specific case study highlights a general aspect in the function of membrane proteins", "start_char_pos": 592, "end_char_pos": 592 } ]
[ 0, 135, 304, 436 ]
1104.0322
1
In a previous paper it was shown that a Markov-functional model with log-normally distributed rates in the terminal measure displays nonanalytic behaviour as a function of the volatility, which is similar to a phase transition in condensed matter physics. More precisely, certain expectation values have discontinuous derivatives with respect to the volatility at a certain critical value of the volatility. Here we discuss the implications of these results for the pricing of interest rates derivatives. We point out the presence of nonanalyticity effects in other quantities of the model, focusing on the properties of the Libor probability distribution function in a measure in which it is simply related to caplet prices. We show that the moments of this distribution function have nonanalytic dependence on the volatility , which are also similar to a phase transition. We study in some detail the pricing of caplets on Libor rates and Libor payments in arrears, and show that the convexity adjustment for the latter is also nonanalytic in the model volatility .
We consider an interest rate model with log-normally distributed rates in the terminal measure in discrete time. Such models are used in financial practice as parametric versions of the Markov functional model, or as approximations to the log-normal Libor market model. We show that the model has two distinct regimes, at high and low volatilities, with different qualitative behavior. The two regimes are separated by a sharp transition, which is similar to a phase transition in condensed matter physics. We study the behavior of the model in the large volatility phase, and discuss the implications of the phase transition for the pricing of interest rate derivatives. In the large volatility phase, certain expectation values and convexity adjustments have an explosive behavior. For sufficiently low volatilities the caplet smile is log-normal to a very good approximation, while in the large volatility phase the model develops a non-trivial caplet skew. The phenomenon discussed here imposes thus an upper limit on the volatilities for which the model behaves as intended .
[ { "type": "R", "before": "In a previous paper it was shown that a Markov-functional", "after": "We consider an interest rate", "start_char_pos": 0, "end_char_pos": 57 }, { "type": "R", "before": "displays nonanalytic behaviour as a function of the volatility,", "after": "in discrete time. Such models are used in financial practice as parametric versions of the Markov functional model, or as approximations to the log-normal Libor market model. We show that the model has two distinct regimes, at high and low volatilities, with different qualitative behavior. The two regimes are separated by a sharp transition,", "start_char_pos": 124, "end_char_pos": 187 }, { "type": "R", "before": "More precisely, certain expectation values have discontinuous derivatives with respect to the volatility at a certain critical value of the volatility. Here we", "after": "We study the behavior of the model in the large volatility phase, and", "start_char_pos": 256, "end_char_pos": 415 }, { "type": "R", "before": "these results", "after": "the phase transition", "start_char_pos": 444, "end_char_pos": 457 }, { "type": "R", "before": "rates derivatives. We point out the presence of nonanalyticity effects in other quantities of the model, focusing on the properties of the Libor probability distribution function in a measure in which it is simply related to caplet prices. We show that the moments of this distribution function have nonanalytic dependence on the volatility , which are also similar to a phase transition. We study in some detail the pricing of caplets on Libor rates and Libor payments in arrears, and show that the convexity adjustment for", "after": "rate derivatives. In the large volatility phase, certain expectation values and convexity adjustments have an explosive behavior. For sufficiently low volatilities the caplet smile is log-normal to a very good approximation, while in the large volatility phase", "start_char_pos": 486, "end_char_pos": 1010 }, { "type": "R", "before": "latter is also nonanalytic in the model volatility", "after": "model develops a non-trivial caplet skew. The phenomenon discussed here imposes thus an upper limit on the volatilities for which the model behaves as intended", "start_char_pos": 1015, "end_char_pos": 1065 } ]
[ 0, 255, 407, 504, 725, 874 ]
1104.0359
1
We study an asymptotic behaviour of the difference between value-at-risks VaR(L) and VaR(L+S) for heavy-tailed random variables L and S as an application to sensitivity analysis of quantitative operational risk management in the framework of an advanced measurement approach (AMA) of Basel II. We have different types of results according to the magnitude relationship of thickness of tails of L and S. Especially if the tail of S is enough thinner than the one of L, then VaR(L+S) - VaR(L) is asymptotically equivalent to an expected loss of S when L and S are independent . We also give some generalized results without the assumption of independence .
We study the asymptotic behaviour of the difference between the Value at Risks VaR(L) and VaR(L+S) for heavy tailed random variables L and S as an application to the sensitivity analysis of quantitative operational risk management in the framework of an advanced measurement approach (AMA) of Basel II. Here the variable L describes the loss amount of the present risk profile and S means the loss amount caused by an additional loss factor. We have different types of results according to the magnitude of the relationship of the thicknesses of the tails of L and S. Especially if the tail of S is sufficiently thinner than that of L, then the difference between prior and posterior risk amounts VaR(L+S) - VaR(L) is asymptotically equivalent to the component VaR of S (which is equal to the expected loss of S when L and S are independent ) .
[ { "type": "R", "before": "an", "after": "the", "start_char_pos": 9, "end_char_pos": 11 }, { "type": "R", "before": "value-at-risks", "after": "the Value at Risks", "start_char_pos": 59, "end_char_pos": 73 }, { "type": "R", "before": "heavy-tailed", "after": "heavy tailed", "start_char_pos": 98, "end_char_pos": 110 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 157, "end_char_pos": 157 }, { "type": "A", "before": null, "after": "Here the variable L describes the loss amount of the present risk profile and S means the loss amount caused by an additional loss factor.", "start_char_pos": 295, "end_char_pos": 295 }, { "type": "R", "before": "relationship of thickness of", "after": "of the relationship of the thicknesses of the", "start_char_pos": 358, "end_char_pos": 386 }, { "type": "R", "before": "enough thinner than the one", "after": "sufficiently thinner than that", "start_char_pos": 436, "end_char_pos": 463 }, { "type": "A", "before": null, "after": "the difference between prior and posterior risk amounts", "start_char_pos": 475, "end_char_pos": 475 }, { "type": "R", "before": "an", "after": "the component VaR of S (which is equal to the", "start_char_pos": 526, "end_char_pos": 528 }, { "type": "R", "before": ". We also give some generalized results without the assumption of independence", "after": ")", "start_char_pos": 577, "end_char_pos": 655 } ]
[ 0, 294, 578 ]
1104.1028
1
Shear stress is an important physical factor that regulates proliferation, migration and morphogenesis. In regenerative medicine, one approach to generate 3D tissues has been to seed a porous scaffold with cells and growth factors under pulsatile flow conditions. However, the resulting bioreactors frequently lack information on intrinsic shear stress and flow distributions . Recent studies have focused on optimizing the microstructural parameters of the scaffold to gain more control over the shear stress. In this study, we adopt a different approach whereby macroscopic flows are redirected throughout the bioreactor along patterned channels in the scaffold that result in optimized shear stress distributions . A topology optimization algorithm coupled to effective-medium Lattice Boltzmann simulations was devised to find an optimal channel design that yields a target shear stress that is uniformly distributed throughout the scaffold . The channel topology in the porous scaffold was varied using a combination of genetic algorithm and fuzzy logic. Unlike methods based on varying micro-architectures, the described approach has the ability to achieve a target shear stress and to distribute the shear stress uniformly throughout the scaffold. The ability of topology optimization to alter the shear stress distribution is implemented in experiments where magnetic resonance imaging (MRI) is used to image the flow field.
Shear stress is an important physical factor that regulates proliferation, migration and morphogenesis. In particular, the homeostasis of blood vessels is dependent on shear stress. To mimic this process ex vivo, efforts have been made to seed scaffolds with vascular and other cell types in the presence of growth factors and under pulsatile flow conditions. However, the resulting bioreactors lack information on shear stress and flow distributions within the scaffold. Consequently, it is difficult to interpret the effects of shear stress on cell function. Such knowledge would enable researchers to improve upon cell culture protocols. Recent work has focused on optimizing the microstructural parameters of the scaffold to fine tune the shear stress. In this study, we have adopted a different approach whereby flows are redirected throughout the bioreactor along channels patterned in the porous scaffold to yield shear stress distributions that are optimized for uniformity centered on a target value . A topology optimization algorithm coupled to computational fluid dynamics simulations was devised to this end . The channel topology in the porous scaffold was varied using a combination of genetic algorithm and fuzzy logic. The method is validated by experiments using magnetic resonance imaging (MRI) readouts of the flow field.
[ { "type": "R", "before": "regenerative medicine, one approach to generate 3D tissues has been to seed a porous scaffold with cells and growth factors", "after": "particular, the homeostasis of blood vessels is dependent on shear stress. To mimic this process ex vivo, efforts have been made to seed scaffolds with vascular and other cell types in the presence of growth factors and", "start_char_pos": 107, "end_char_pos": 230 }, { "type": "D", "before": "frequently", "after": null, "start_char_pos": 299, "end_char_pos": 309 }, { "type": "D", "before": "intrinsic", "after": null, "start_char_pos": 330, "end_char_pos": 339 }, { "type": "R", "before": ". Recent studies have", "after": "within the scaffold. Consequently, it is difficult to interpret the effects of shear stress on cell function. Such knowledge would enable researchers to improve upon cell culture protocols. Recent work has", "start_char_pos": 376, "end_char_pos": 397 }, { "type": "R", "before": "gain more control over", "after": "fine tune", "start_char_pos": 470, "end_char_pos": 492 }, { "type": "R", "before": "adopt", "after": "have adopted", "start_char_pos": 529, "end_char_pos": 534 }, { "type": "D", "before": "macroscopic", "after": null, "start_char_pos": 564, "end_char_pos": 575 }, { "type": "R", "before": "patterned channels in the scaffold that result in optimized", "after": "channels patterned in the porous scaffold to yield", "start_char_pos": 629, "end_char_pos": 688 }, { "type": "A", "before": null, "after": "that are optimized for uniformity centered on a target value", "start_char_pos": 716, "end_char_pos": 716 }, { "type": "R", "before": "effective-medium Lattice Boltzmann", "after": "computational fluid dynamics", "start_char_pos": 764, "end_char_pos": 798 }, { "type": "R", "before": "find an optimal channel design that yields a target shear stress that is uniformly distributed throughout the scaffold", "after": "this end", "start_char_pos": 826, "end_char_pos": 944 }, { "type": "R", "before": "Unlike methods based on varying micro-architectures, the described approach has the ability to achieve a target shear stress and to distribute the shear stress uniformly throughout the scaffold. The ability of topology optimization to alter the shear stress distribution is implemented in experiments where", "after": "The method is validated by experiments using", "start_char_pos": 1060, "end_char_pos": 1366 }, { "type": "R", "before": "is used to image", "after": "readouts of", "start_char_pos": 1400, "end_char_pos": 1416 } ]
[ 0, 103, 263, 510, 718, 946, 1059, 1254 ]
1104.1421
1
The mechanical properties of gram-negative bacteria are governed by a rigid peptidoglycan (PG) cell wall and the turgor pressure generated by the large concentration of solutes in the cytoplasm. The elasticity of the PG has been measured in bulk and in isolated sacculi and shown to be compliant compared to the overall stiffness of the cell itself. However, the stiffness of the cell wall in live cells has not been measured. In particular, the effects that pressure-induced stress might have on the stiffness of the mesh-like PG network have not been addressed even though polymeric materials often exhibit large amounts of stress-stiffening. We study bulging Escherichia coli cells using atomic force microscopy to separate the contributions of the cell wall and turgor pressure to the overall cell stiffness. We find strong evidence of power-law stress-stiffening in the E. coli cell wall, with an exponent of 1.07 \pm 0.25 , such that the wall is significantly stiffer in live cells (E \sim 32 \pm 10 MPa) than in unpressurized saculli . These measurements also indicate that the turgor pressure in E. coli is 26 \pm 4 kPa.
We study intact and bulging Escherichia coli cells using atomic force microscopy to separate the contributions of the cell wall and turgor pressure to the overall cell stiffness. We find strong evidence of power-law stress-stiffening in the E. coli cell wall, with an exponent of 1.22 \pm 0.12 , such that the wall is significantly stiffer in intact cells (E = 23\pm 8 MPa and 49\pm 20 MPa in the axial and circumferential directions ) than in unpressurized sacculi . These measurements also indicate that the turgor pressure in living cells E. coli is 29 \pm 3 kPa.
[ { "type": "R", "before": "The mechanical properties of gram-negative bacteria are governed by a rigid peptidoglycan (PG) cell wall and the turgor pressure generated by the large concentration of solutes in the cytoplasm. The elasticity of the PG has been measured in bulk and in isolated sacculi and shown to be compliant compared to the overall stiffness of the cell itself. However, the stiffness of the cell wall in live cells has not been measured. In particular, the effects that pressure-induced stress might have on the stiffness of the mesh-like PG network have not been addressed even though polymeric materials often exhibit large amounts of stress-stiffening. We study", "after": "We study intact and", "start_char_pos": 0, "end_char_pos": 653 }, { "type": "R", "before": "1.07", "after": "1.22", "start_char_pos": 914, "end_char_pos": 918 }, { "type": "R", "before": "0.25", "after": "0.12", "start_char_pos": 923, "end_char_pos": 927 }, { "type": "R", "before": "live", "after": "intact", "start_char_pos": 977, "end_char_pos": 981 }, { "type": "R", "before": "MPa", "after": "= 23", "start_char_pos": 1020, "end_char_pos": 1023 }, { "type": "A", "before": null, "after": "8 MPa and 49", "start_char_pos": 1027, "end_char_pos": 1027 }, { "type": "A", "before": null, "after": "20 MPa in the axial and circumferential directions", "start_char_pos": 1031, "end_char_pos": 1031 }, { "type": "R", "before": "saculli", "after": "sacculi", "start_char_pos": 1056, "end_char_pos": 1063 }, { "type": "A", "before": null, "after": "living cells", "start_char_pos": 1127, "end_char_pos": 1127 }, { "type": "R", "before": "26", "after": "29", "start_char_pos": 1139, "end_char_pos": 1141 }, { "type": "R", "before": "4", "after": "3", "start_char_pos": 1146, "end_char_pos": 1147 } ]
[ 0, 194, 349, 426, 644, 812, 1065 ]
1104.1773
1
We develop a dynamic point process model of correlated default timing in a portfolio of firms, and analyze typical and atypical default profiles in the limit as the size of the pool grows. In our model, a name defaults at a stochastic intensity that is influenced by an idiosyncratic risk process, a systematic risk process common to all names , and past defaults. We prove a law of large numbers for the default rate in the pool, which describes the "typical" behavior of defaults . Large deviation arguments are then used to identify the way that atypically large (i.e., "rare") default clusters are most likely to occur. Our results give insights into how different sources of default correlation interact to generate excessive portfolio losses .
We develop a dynamic point process model of correlated default timing in a portfolio of firms, and analyze typical default profiles in the limit as the size of the pool grows. In our model, a firm defaults at a stochastic intensity that is influenced by an idiosyncratic risk process, a systematic risk process common to all firms , and past defaults. We prove a law of large numbers for the default rate in the pool, which describes the "typical" behavior of defaults .
[ { "type": "D", "before": "and atypical", "after": null, "start_char_pos": 115, "end_char_pos": 127 }, { "type": "R", "before": "name", "after": "firm", "start_char_pos": 205, "end_char_pos": 209 }, { "type": "R", "before": "names", "after": "firms", "start_char_pos": 338, "end_char_pos": 343 }, { "type": "D", "before": ". Large deviation arguments are then used to identify the way that atypically large (i.e., \"rare\") default clusters are most likely to occur. Our results give insights into how different sources of default correlation interact to generate excessive portfolio losses", "after": null, "start_char_pos": 482, "end_char_pos": 747 } ]
[ 0, 188, 364, 623 ]
1104.2499
1
This work presents a methodology for studying active Brownian dynamics on ratchet potentials using interoperating OpenCL and OpenGL frameworks. Programing details along with optimization issues are discussed, followed by a comparison of performance on different devices. Time of visualization using OpenGL sharing buffer with OpenCL has been tested against another technique which, while using OpenGL, does not share memory buffer with OpenCL. Both methods has been compared with visualizing data to external software - gnuplot. OpenCL/OpenGL interoperating method has been found the most appropriate to visualize large set of data for which calculation itself is not very long.
This work presents a methodology for studying active Brownian dynamics on ratchet potentials using interoperating OpenCL and OpenGL frameworks. Programing details along with optimization issues are discussed, followed by a com- parison of performance on different devices. Time of visualization using OpenGL sharing buffer with OpenCL has been tested against another technique which, while using OpenGL, does not share memory buffer with OpenCL. Both methods have been compared with visualizing data to an external software - gnuplot. OpenCL/OpenGL interoperating method has been found the most appropriate to visualize any large set of data for which calculation itself is not very long.
[ { "type": "R", "before": "comparison", "after": "com- parison", "start_char_pos": 223, "end_char_pos": 233 }, { "type": "R", "before": "has", "after": "have", "start_char_pos": 457, "end_char_pos": 460 }, { "type": "A", "before": null, "after": "an", "start_char_pos": 500, "end_char_pos": 500 }, { "type": "A", "before": null, "after": "any", "start_char_pos": 615, "end_char_pos": 615 } ]
[ 0, 143, 270, 443, 529 ]
1104.2613
1
We present the first systematic measurement of critical dynamics of a 2D liquid with conserved order parameter embedded in bulk 3D fluid, here a lipid membrane in water . We use scaling to collapse time dependent structure factors at multiple wavenumbers and find the effective dynamic exponent z_{ eff } \approx 3 near the critical point. This value and the form of the structure factor are in excellent agreement with recent theoretical predictions invoking hydrodynamic coupling to the bulk. Our result may have biological relevance since membranes isolated from cells are near miscibility critical points .
Near a critical point, the time scale of thermally-induced fluctuations diverges in a manner determined by the dynamic universality class. Experiments have verified predicted 3D dynamic critical exponents in many systems, but similar experiments in 2D have been lacking for the case of conserved order parameter . Here we analyze time-dependent correlation functions of a quasi-2D lipid bilayer in water to show that its critical dynamics agree with a recently predicted universality class. In particular, the effective dynamic exponent z_{\text{eff}} crosses over from \sim 2 to \sim 3 as the correlation length of fluctuations exceeds a hydrodynamic length set by the membrane and bulk viscosities .
[ { "type": "R", "before": "We present the first systematic measurement of critical dynamics of a", "after": "Near a critical point, the time scale of thermally-induced fluctuations diverges in a manner determined by the dynamic universality class. Experiments have verified predicted 3D dynamic critical exponents in many systems, but similar experiments in", "start_char_pos": 0, "end_char_pos": 69 }, { "type": "R", "before": "liquid with", "after": "have been lacking for the case of", "start_char_pos": 73, "end_char_pos": 84 }, { "type": "R", "before": "embedded in bulk 3D fluid, here a lipid membrane in water . We use scaling to collapse time dependent structure factors at multiple wavenumbers and find", "after": ". Here we analyze time-dependent correlation functions of a quasi-2D lipid bilayer in water to show that its critical dynamics agree with a recently predicted universality class. In particular,", "start_char_pos": 111, "end_char_pos": 263 }, { "type": "R", "before": "eff", "after": "\\text{eff", "start_char_pos": 299, "end_char_pos": 302 }, { "type": "R", "before": "\\approx", "after": "crosses over from \\sim 2 to \\sim", "start_char_pos": 305, "end_char_pos": 312 }, { "type": "R", "before": "near the critical point. This value and the form of the structure factor are in excellent agreement with recent theoretical predictions invoking hydrodynamic coupling to the bulk. Our result may have biological relevance since membranes isolated from cells are near miscibility critical points", "after": "as the correlation length of fluctuations exceeds a hydrodynamic length set by the membrane and bulk viscosities", "start_char_pos": 315, "end_char_pos": 608 } ]
[ 0, 170, 339, 494 ]
1104.3559
1
One popular assumption regarding biological systems is that traits have evolved to be optimized with respect to function. This is a standard goal in evolutionary computation, and while not always embraced in the biological sciences, is an underlying assumption of what happens when fitness is maximized. The implication of this is that a signaling pathway or phylogeny should show evidence of minimizing the number of steps required to produce a biochemical product or phenotypic adaptation. In this paper, it will be shown that a principle of "maximum intermediate steps" may also characterize complex biological systems, especially those in which extreme historical contingency or a combination of mutation and recombination are key features. The contribution to existing literature is two-fold: demonstrating both the potential for non-optimality in engineered systems with "lifelike" attributes, and the underpinnings of non-optimality in naturalistic contexts. This will be demonstrated by using the Rube Goldberg Machine (RGM) analogy. Mechanical RGMs will be introduced, and their relationship to conceptual biological RGMs explained . Exemplars of these biological RGMs and their evolution (e.g. introduction of mutations and recombination-like inversions) will be demonstrated using block diagrams . The conceptual biological RGM will then be mapped to an artificial vascular system, which can be modeled using microfluidic-like structures. Theoretical expectations will be presented, particularly regarding whether or not maximum intermediate steps equates to the rescue or reuse of traits compromised by previous mutations or inversions. Considerations for future work and applications will then be discussed .
One popular assumption regarding biological systems is that traits have evolved to be optimized with respect to function. This is a standard goal in evolutionary computation, and while not always embraced in the biological sciences, is an underlying assumption of what happens when fitness is maximized. The implication of this is that a signaling pathway or phylogeny should show evidence of minimizing the number of steps required to produce a biochemical product or phenotypic adaptation. In this paper, it will be shown that a principle of "maximum intermediate steps" may also characterize complex biological systems, especially those in which extreme historical contingency or a combination of mutation and recombination are key features. The contribution to existing literature is two-fold: demonstrating both the potential for non-optimality in engineered systems with "lifelike" attributes, and the underpinnings of non-optimality in naturalistic contexts. This will be demonstrated by using the Rube Goldberg Machine (RGM) analogy. Mechanical RGMs will be introduced, and their relationship to conceptual biological RGMs . Exemplars of these biological RGMs and their evolution (e.g. introduction of mutations and recombination-like inversions) will be demonstrated using block diagrams and interconnections with complex networks (called convolution architectures) . The conceptual biological RGM will then be mapped to an artificial vascular system, which can be modeled using microfluidic-like structures. Theoretical expectations will be presented, particularly regarding whether or not maximum intermediate steps equates to the rescue or reuse of traits compromised by previous mutations or inversions. Considerations for future work and applications will then be discussed , including the incorporation of such convolution architectures into complex networks .
[ { "type": "D", "before": "explained", "after": null, "start_char_pos": 1131, "end_char_pos": 1140 }, { "type": "A", "before": null, "after": "and interconnections with complex networks (called convolution architectures)", "start_char_pos": 1307, "end_char_pos": 1307 }, { "type": "A", "before": null, "after": ", including the incorporation of such convolution architectures into complex networks", "start_char_pos": 1721, "end_char_pos": 1721 } ]
[ 0, 121, 303, 491, 744, 965, 1041, 1309, 1450, 1649 ]
1104.3583
1
Recent work of Dupire (2005) and Carr Lee (2010) has highlighted the importance of understanding the Skorokhod embedding originally proposed by Root (1969) for the model-independent hedging of variance options. Root's work shows that there exists a barrier from which one may define a stopping time which solves the Skorokhod embedding problem. This construction has the remarkable property, proved by Rost (1976) , that it minimises the variance of the stopping time among all solutions. In this work, we prove a characterisation of Root's barrier in terms of the solution to a variational inequality, and we give an alternative proof of the optimality property which has an important consequence for the construction of subhedging strategies in the financial context.
Recent work of Dupire and Carr and Lee has highlighted the importance of understanding the Skorokhod embedding originally proposed by Root for the model-independent hedging of variance options. Root's work shows that there exists a barrier from which one may define a stopping time which solves the Skorokhod embedding problem. This construction has the remarkable property, proved by Rost , that it minimizes the variance of the stopping time among all solutions. In this work, we prove a characterization of Root's barrier in terms of the solution to a variational inequality, and we give an alternative proof of the optimality property which has an important consequence for the construction of subhedging strategies in the financial context.
[ { "type": "D", "before": "(2005) and Carr", "after": null, "start_char_pos": 22, "end_char_pos": 37 }, { "type": "R", "before": "Lee (2010)", "after": "and Carr and Lee", "start_char_pos": 38, "end_char_pos": 48 }, { "type": "D", "before": "(1969)", "after": null, "start_char_pos": 149, "end_char_pos": 155 }, { "type": "D", "before": "(1976)", "after": null, "start_char_pos": 407, "end_char_pos": 413 }, { "type": "R", "before": "minimises", "after": "minimizes", "start_char_pos": 424, "end_char_pos": 433 }, { "type": "R", "before": "characterisation", "after": "characterization", "start_char_pos": 514, "end_char_pos": 530 } ]
[ 0, 210, 344, 488 ]
1104.4249
1
Globalization has created an international financial network of countries linked by trade in goods and assets. These linkages allow for more efficient resource allocation across borders, but also create potentially hazardous financial interdependence, such as the great financial distress caused by the 2010 threat of Greece 's default or the 2008 collapse of Lehman Brothers. Increasingly, the tools of network science are being used as a means of articulating in a quantitative way measures of financial interdependence and stability . In this paper , we employ two networkanalysis methods on the international investment network derived from the IMF Coordinated Portfolio Investment Survey (CPIS) . Via the "error and attack" methodology [1], we show that the CPIS network is of the " robust- yet-fragile " type, similar to a wide variety of evolved networks 1, 2 . In particular, the network is robust to random shocks but very fragile when key financial centers (e. g., the United States and the Cayman Islands) are affected. Using loss-given-default dynamics [ 3 ], "extinction analysis" simulations show that interdependence increased from 2001 to 2007. Our simulations further indicate that default by a single relatively small country like Greece can be absorbed by the network, but that default in combination with defaults of other PIGS countries (Portugal, Ireland, and Spain) could lead to a massive extinction cascade in the global economy. Adaptations of this approach could form the basis for risk metrics designed to monitor and guide policy formulation for the stability of the global economy.
The recent financial crisis of 2008 and the 2011 indebtedness of Greece highlight the importance of understanding the structure of the global financial network . In this paper we set out to analyze and characterize this network, as captured by the IMF Coordinated Portfolio Investment Survey (CPIS) , in two ways. First, through an adaptation of the "error and attack" methodology [1], we show that the network is of the " robust-yet-fragile " type, a topology found in a wide variety of evolved networks . We compare these results against four common null-models, generated only from first-order statistics of the empirical data. In addition, we suggest a fifth, log-normal model, which generates networks that seem to match the empirical one more closely. Still, this model does not account for several higher order network statistics, which reenforces the added value of the higher-order analysis. Second, using loss-given-default dynamics [ 2 ], we model financial interdependence and potential cascading of financial distress through the network. Preliminary simulations indicate that default by a single relatively small country like Greece can be absorbed by the network, but that default in combination with defaults of other PIGS countries (Portugal, Ireland, and Spain) could lead to a massive extinction cascade in the global economy.
[ { "type": "R", "before": "Globalization has created an international financial network of countries linked by trade in goods and assets. These linkages allow for more efficient resource allocation across borders, but also create potentially hazardous financial interdependence, such as the great financial distress caused by the 2010 threat of Greece 's default or the 2008 collapse of Lehman Brothers. Increasingly, the tools of network science are being used as a means of articulating in a quantitative way measures of financial interdependence and stability", "after": "The recent financial crisis of 2008 and the 2011 indebtedness of Greece highlight the importance of understanding the structure of the global financial network", "start_char_pos": 0, "end_char_pos": 535 }, { "type": "R", "before": ", we employ two networkanalysis methods on the international investment network derived from the", "after": "we set out to analyze and characterize this network, as captured by the", "start_char_pos": 552, "end_char_pos": 648 }, { "type": "R", "before": ". Via", "after": ", in two ways. First, through an adaptation of", "start_char_pos": 700, "end_char_pos": 705 }, { "type": "D", "before": "CPIS", "after": null, "start_char_pos": 763, "end_char_pos": 767 }, { "type": "R", "before": "robust- yet-fragile", "after": "robust-yet-fragile", "start_char_pos": 788, "end_char_pos": 807 }, { "type": "R", "before": "similar to a", "after": "a topology found in a", "start_char_pos": 816, "end_char_pos": 828 }, { "type": "D", "before": "1, 2", "after": null, "start_char_pos": 862, "end_char_pos": 866 }, { "type": "R", "before": ". In particular, the network is robust to random shocks but very fragile when key financial centers (e. g., the United States and the Cayman Islands) are affected. Using", "after": ". We compare these results against four common null-models, generated only from first-order statistics of the empirical data. In addition, we suggest a fifth, log-normal model, which generates networks that seem to match the empirical one more closely. Still, this model does not account for several higher order network statistics, which reenforces the added value of the higher-order analysis. Second, using", "start_char_pos": 867, "end_char_pos": 1036 }, { "type": "R", "before": "3", "after": "2", "start_char_pos": 1067, "end_char_pos": 1068 }, { "type": "R", "before": "\"extinction analysis\" simulations show that interdependence increased from 2001 to 2007. Our simulations further", "after": "we model financial interdependence and potential cascading of financial distress through the network. Preliminary simulations", "start_char_pos": 1072, "end_char_pos": 1184 }, { "type": "D", "before": "the global economy. Adaptations of this approach could form the basis for risk metrics designed to monitor and guide policy formulation for the stability of", "after": null, "start_char_pos": 1435, "end_char_pos": 1591 } ]
[ 0, 110, 376, 537, 1030, 1160, 1454 ]
1104.4380
1
The World Trade Web (WTW) is a weighted network whose nodes correspond to countries with edge weights reflecting the value of imports and/or exports between countries. In this paper we introduce to this macroeconomic system the notion of extinction analysis, a technique often used in the analysis of ecosystems, for the purposes of investigating the robustness of this network. In particular, we subject the WTW to a principled set of in silico "knockout experiments ", akin to those carried out in the investigation of food webs, but suitably adapted to this macroeconomic network. Broadly, our experiments show that over time the WTW moves to a "robust yet fragile" configuration where is it robust under random attacks but fragile under targeted attack. This change in stability is highly correlated with the connectance of the network. Moreover, there is evidence of sharp change in the structure of the network in the 1960s and 1970s, where the most measures of robustness rapidly increase before resuming a declining trend. We interpret these results in the context in the post-World War II move towards globalization. Globalization coincides with the sharp increase in robustness but also with a rise in those measures (e.g., connectance and trade imbalances) which correlate with decreases in robustness. The peak of robustness is reached after the onset of globalization policy but before the negative impacts are substantial. In this way we anticipate that knockout experiments like these can play an important role in the evaluation of the stability of economic systems.
The World Trade Web (WTW) is a weighted network whose nodes correspond to countries with edge weights reflecting the value of imports and/or exports between countries. In this paper we introduce to this macroeconomic system the notion of extinction analysis, a technique often used in the analysis of ecosystems, for the purposes of investigating the robustness of this network. In particular, we subject the WTW to a principled set of in silico "knockout experiments ," akin to those carried out in the investigation of food webs, but suitably adapted to this macroeconomic network. Broadly, our experiments show that over time the WTW moves to a "robust yet fragile" configuration where it is robust to random failures but fragile under targeted attack. This change in stability is highly correlated with the connectance (edge density) of the network. Moreover, there is evidence of a sharp change in the structure of the network in the 1960s and 1970s, where most measures of robustness rapidly increase before resuming a declining trend. We interpret these results in the context in the post-World War II move towards globalization. Globalization coincides with the sharp increase in robustness but also with a rise in those measures (e.g., connectance and trade imbalances) which correlate with decreases in robustness. The peak of robustness is reached after the onset of globalization policy but before the negative impacts are substantial. These analyses depend on a simple model of dynamics that rebalances the trade flow upon network perturbation, the most dramatic of which is node deletion. More subtle and textured forms of perturbation lead to the definition of other measures of node importance as well as vulnerability. We anticipate that experiments and measures like these can play an important role in the evaluation of the stability of economic systems.
[ { "type": "R", "before": "\",", "after": ",\"", "start_char_pos": 468, "end_char_pos": 470 }, { "type": "R", "before": "is it robust under random attacks", "after": "it is robust to random failures", "start_char_pos": 689, "end_char_pos": 722 }, { "type": "A", "before": null, "after": "(edge density)", "start_char_pos": 825, "end_char_pos": 825 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 873, "end_char_pos": 873 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 949, "end_char_pos": 952 }, { "type": "R", "before": "In this way we anticipate that knockout experiments", "after": "These analyses depend on a simple model of dynamics that rebalances the trade flow upon network perturbation, the most dramatic of which is node deletion. More subtle and textured forms of perturbation lead to the definition of other measures of node importance as well as vulnerability. We anticipate that experiments and measures", "start_char_pos": 1439, "end_char_pos": 1490 } ]
[ 0, 167, 378, 583, 757, 841, 1032, 1127, 1315, 1438 ]
1104.4616
1
Prion diseases {\it (e.g. Creutzfeldt-Jakob disease (CJD), variant CJD (vCJD), Gerstmann-Str\"aussler-Scheinker syndrome (GSS), Fatal Familial Insomnia (FFI) and Kuru in humans, scrapie in sheep, bovine spongiform encephalopathy (BSE or `mad-cow' disease) and chronic wasting disease (CWD) in cattles)} are invariably fatal and highly infectious neurodegenerative diseases affecting humans and animals. However, by now there have not been some effective therapeutic approaches or medications to treat all these prion diseases. Rabbits, dogs, and horses are the only mammalian species reported to be resistant to infection from prion diseases isolated from other species. Recently, the \beta2 -- \alpha2 loop has been reported to contribute to their protein structural stabilities. The author has found that rabbit prion protein has a strong salt bridge ASP177-ARG163 (like a taut bow string) keeping this loop linked. This paper confirms that this salt bridge also contributes to the structural stability of horse prion protein. Thus, the region of \beta2 -- \alpha2 loop should be a potential drug target region. Except for this very important salt bridge, other three important salt bridges GLU196--ARG156--HIS187 and GLU211--HIS177 are also found to greatly contribute to the structural stability of horse prion protein. Rich databases of salt bridges, hydrogen bonds and hydrophobic contacts for horse prion protein can be found in this paper.
Prion diseases {\it (e.g. Creutzfeldt-Jakob disease (CJD), variant CJD (vCJD), Gerstmann-Str\"aussler-Scheinker syndrome (GSS), Fatal Familial Insomnia (FFI) and Kuru in humans, scrapie in sheep, bovine spongiform encephalopathy (BSE or `mad-cow' disease) and chronic wasting disease (CWD) in cattles)} are invariably fatal and highly infectious neurodegenerative diseases affecting humans and animals. However, by now there have not been some effective therapeutic approaches or medications to treat all these prion diseases. Rabbits, dogs, and horses are the only mammalian species reported to be resistant to infection from prion diseases isolated from other species. Recently, the \beta2 - \alpha2 loop has been reported to contribute to their protein structural stabilities. The author has found that rabbit prion protein has a strong salt bridge ASP177-ARG163 (like a taut bow string) keeping this loop linked. This paper confirms that this salt bridge also contributes to the structural stability of horse prion protein. Thus, the region of \beta2 - \alpha2 loop might be a potential drug target region. Besides this very important salt bridge, other four important salt bridges GLU196-ARG156-HIS187, ARG156-ASP202 and GLU211-HIS177 are also found to greatly contribute to the structural stability of horse prion protein. Rich databases of salt bridges, hydrogen bonds and hydrophobic contacts for horse prion protein can be found in this paper.
[ { "type": "R", "before": "--", "after": "-", "start_char_pos": 697, "end_char_pos": 699 }, { "type": "R", "before": "--", "after": "-", "start_char_pos": 1061, "end_char_pos": 1063 }, { "type": "R", "before": "should", "after": "might", "start_char_pos": 1077, "end_char_pos": 1083 }, { "type": "R", "before": "Except for", "after": "Besides", "start_char_pos": 1119, "end_char_pos": 1129 }, { "type": "R", "before": "three", "after": "four", "start_char_pos": 1169, "end_char_pos": 1174 }, { "type": "R", "before": "GLU196--ARG156--HIS187 and GLU211--HIS177", "after": "GLU196-ARG156-HIS187, ARG156-ASP202 and GLU211-HIS177", "start_char_pos": 1198, "end_char_pos": 1239 } ]
[ 0, 407, 531, 675, 785, 922, 1033, 1118, 1328 ]