doc_id: string (lengths 2 to 10)
revision_depth: string (5 classes)
before_revision: string (lengths 3 to 309k)
after_revision: string (lengths 5 to 309k)
edit_actions: list
sents_char_pos: sequence
0812.3117
1
In this paper we develop an algorithm to calculate prices and Greeks of barrier options driven by a class of additive processes. Additive processes are time-inhomogeneous Levy processes, or equivalently, processes with independent but inhomogeneous increments . We obtain an explicit semi-analytical expression for the first-passage probability of an additive process with hyper-exponential jumps . The solution rests on a randomization and an explicit matrix Wiener- Hopf factorization. Employing this result we derive explicit expressions for the Laplace(-Fourier) transforms of prices and Greeks of digital and barrier options. As numerical illustration, the model is simultaneously calibrated to Stoxx50E call options at four different maturities and subsequently prices and Greeks of down-and-in digital and down-and-in call options are calculated . Comparison with Monte Carlo simulation results shows that the method is fast, accurate, and stable.
In this paper we develop an algorithm to calculate the prices and Greeks of barrier options in a hyper-exponential additive model with piecewise constant parameters . We obtain an explicit semi-analytical expression for the first-passage probability . The solution rests on a randomization and an explicit matrix Wiener-Hopf factorization. Employing this result we derive explicit expressions for the Laplace-Fourier transforms of the prices and Greeks of barrier options. As a numerical illustration, the prices and Greeks of down-and-in digital and down-and-in call options are calculated for a set of parameters obtained by a simultaneous calibration to Stoxx50E call options across strikes and four different maturities. By comparing the results with Monte-Carlo simulations, we show that the method is fast, accurate, and stable.
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 51, "end_char_pos": 51 }, { "type": "R", "before": "driven by a class of additive processes. Additive processes are time-inhomogeneous Levy processes, or equivalently, processes with independent but inhomogeneous increments", "after": "in a hyper-exponential additive model with piecewise constant parameters", "start_char_pos": 89, "end_char_pos": 260 }, { "type": "D", "before": "of an additive process with hyper-exponential jumps", "after": null, "start_char_pos": 346, "end_char_pos": 397 }, { "type": "R", "before": "Wiener- Hopf", "after": "Wiener-Hopf", "start_char_pos": 461, "end_char_pos": 473 }, { "type": "R", "before": "Laplace(-Fourier) transforms of", "after": "Laplace-Fourier transforms of the", "start_char_pos": 550, "end_char_pos": 581 }, { "type": "D", "before": "digital and", "after": null, "start_char_pos": 603, "end_char_pos": 614 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 635, "end_char_pos": 635 }, { "type": "D", "before": "model is simultaneously calibrated to Stoxx50E call options at four different maturities and subsequently", "after": null, "start_char_pos": 664, "end_char_pos": 769 }, { "type": "R", "before": ". Comparison with Monte Carlo simulation results shows", "after": "for a set of parameters obtained by a simultaneous calibration to Stoxx50E call options across strikes and four different maturities. By comparing the results with Monte-Carlo simulations, we show", "start_char_pos": 855, "end_char_pos": 909 } ]
[ 0, 129, 262, 399, 488, 631 ]
0812.3867
1
We study the statistical properties of a simple genetic regulatory network that provides heterogeneity within a population of cells. This network consists of a binary genetic switch in which stochastic flipping between the two switch states is mediated by a "flipping" enzyme. Feedback between the switch state and the flipping rate is provided by a linear feedback mechanism: the flipping enzyme is only produced in the on switch state and the switching rate depends linearly on the copy number of the enzyme. ] We present a general solution for the steady-state statistics of the number of enzyme molecules in the on and off states, and for the flip time (persistence ) distributions for this model switch . We show that the statistics of the model are non-Poissonian, leading to a peak in the flip time distribution. We also show that this model can exhibit long-lived temporal correlations, thus providing a primitive form of cellular memory. Motivated by DNA replication as well as by evolutionary mechanisms involving gene duplication, we study the case of two switches in the same cell. This results in correlations between the two switches; these can either positive or negative depending on the parameter regime.
We study the statistical properties of a simple genetic regulatory network that provides heterogeneity within a population of cells. This network consists of a binary genetic switch in which stochastic flipping between the two switch states is mediated by a "flipping" enzyme. Feedback between the switch state and the flipping rate is provided by a linear feedback mechanism: the flipping enzyme is only produced in the on switch state and the switching rate depends linearly on the copy number of the enzyme. This work generalises the model of Phys. Rev. Lett., 101, 118104] to a broader class of linear feedback systems. We present a complete analytical solution for the steady-state statistics of the number of enzyme molecules in the on and off states, for the general case where the enzyme can mediate flipping in either direction. For this general case we also solve for the flip time distribution, making a connection to first passage and persistence problems in statistical physics . We show that the statistics of the model are non-Poissonian, leading to a peak in the flip time distribution. The occurrence of such a peak is analysed as a function of the parameter space. We present a new relation between the flip time distributions measured for two relevant choices of initial condition. We also introduce a new correlation measure to show that this model can exhibit long-lived temporal correlations, thus providing a primitive form of cellular memory. Motivated by DNA replication as well as by evolutionary mechanisms involving gene duplication, we study the case of two switches in the same cell. This results in correlations between the two switches; these can either positive or negative depending on the parameter regime.
[ { "type": "A", "before": null, "after": "This work generalises the model of", "start_char_pos": 511, "end_char_pos": 511 }, { "type": "A", "before": null, "after": "Phys. Rev. Lett., 101, 118104", "start_char_pos": 512, "end_char_pos": 512 }, { "type": "A", "before": null, "after": "to a broader class of linear feedback systems.", "start_char_pos": 514, "end_char_pos": 514 }, { "type": "R", "before": "general", "after": "complete analytical", "start_char_pos": 528, "end_char_pos": 535 }, { "type": "R", "before": "and for the flip time (persistence ) distributions for this model switch", "after": "for the general case where the enzyme can mediate flipping in either direction. For this general case we also solve for the flip time distribution, making a connection to first passage and persistence problems in statistical physics", "start_char_pos": 637, "end_char_pos": 709 }, { "type": "R", "before": "We also", "after": "The occurrence of such a peak is analysed as a function of the parameter space. We present a new relation between the flip time distributions measured for two relevant choices of initial condition. We also introduce a new correlation measure to", "start_char_pos": 822, "end_char_pos": 829 } ]
[ 0, 132, 276, 510, 711, 821, 948, 1095, 1150 ]
0812.3933
1
Sorting permutations by reversals and/or transpositions is an important genome rearrangement problem in computational molecular biology. From theoretical point of view, finding efficient algorithms for this problem and its variations are quite challenging. In this paper we consider the problem of sorting unsigned permutations by prefix reversals and prefix transpositions , where a prefix reversal or a prefix transposition is applied always at the unsorted suffix of the given permutation. For this problem we first present a 3-approximation algorithm, which performs close to 2 in practice. We further analyze the problem in more practical way and consider the variation where a certain number of prefix reversals are forced to happen in the sorting process. Here we achieve an improved approximation ratio of (3-\frac{3r{b(\pi)+r}), where r is the } number of prefix reversals that must happen in the sorting, when possible, and b(\pi)\geq 2r is the number of breakpoints in the given permutation. Again, in this variation our algorithm performs close to 1.6 in practice .
In this paper we study several variations of thepancake flipping problem, which is also well known as the problem ofsorting by prefix reversals. We consider the variations in the sorting process by adding with prefix reversals other similar operations such as prefix transpositions and prefix transreversals. These type of sorting problems have applications in interconnection networks and computational biology. We first study the problem of sorting unsigned permutations by prefix reversals and prefix transpositions and present a 3-approximation algorithm for this problem. Then we give a 2-approximation algorithm for sorting by prefix reversals and prefix transreversals. We also provide a 3-approximation algorithm for sorting by prefix reversals and prefix transpositions where the operations are always applied at the unsorted suffix of the permutation. We further analyze the problem in more practical way and {b(\pi)+r}), where r is the } show quantitatively how approximation ratios of our algorithms improve with the increase of number of prefix reversals applied by optimal algorithms. Finally, we present experimental results to support our analysis .
[ { "type": "R", "before": "Sorting permutations by reversals and/or transpositions is an important genome rearrangement problem in computational molecular biology. From theoretical point of view, finding efficient algorithms for this problem and its variations are quite challenging. In this paper we consider", "after": "In this paper we study several variations of the", "start_char_pos": 0, "end_char_pos": 282 }, { "type": "A", "before": null, "after": "pancake flipping problem", "start_char_pos": 282, "end_char_pos": 282 }, { "type": "A", "before": null, "after": ", which is also well known as the problem of", "start_char_pos": 282, "end_char_pos": 282 }, { "type": "A", "before": null, "after": "sorting by prefix reversals", "start_char_pos": 282, "end_char_pos": 282 }, { "type": "A", "before": null, "after": ". We consider the variations in the sorting process by adding with prefix reversals other similar operations such as prefix transpositions and prefix transreversals. These type of sorting problems have applications in interconnection networks and computational biology. We first study", "start_char_pos": 282, "end_char_pos": 282 }, { "type": "R", "before": ", where a prefix reversal or a prefix transposition is applied always", "after": "and present a 3-approximation algorithm for this problem. Then we give a 2-approximation algorithm for sorting by prefix reversals and prefix transreversals. We also provide a 3-approximation algorithm for sorting by prefix reversals and prefix transpositions where the operations are always applied", "start_char_pos": 374, "end_char_pos": 443 }, { "type": "R", "before": "given permutation. For this problem we first present a 3-approximation algorithm, which performs close to 2 in practice.", "after": "permutation.", "start_char_pos": 474, "end_char_pos": 594 }, { "type": "D", "before": "consider the variation where a certain number of prefix reversals are forced to happen in the sorting process. 
Here we achieve an improved approximation ratio of (3-\\frac{3r", "after": null, "start_char_pos": 652, "end_char_pos": 825 }, { "type": "A", "before": null, "after": "show quantitatively how approximation ratios of our algorithms improve with the increase of", "start_char_pos": 855, "end_char_pos": 855 }, { "type": "R", "before": "that must happen in the sorting, when possible, and b(\\pi)\\geq 2r is the number of breakpoints in the given permutation. Again, in this variation our algorithm performs close to 1.6 in practice", "after": "applied by optimal algorithms. Finally, we present experimental results to support our analysis", "start_char_pos": 883, "end_char_pos": 1076 } ]
[ 0, 136, 256, 492, 594, 762, 1003 ]
0812.4619
1
The system-level dynamics of biomolecular interactions can be difficult to specify and simulate using methods that involve explicit specification of a chemical reaction network. Here, we present a stochastic simulation method for determining the kinetics of multivalent biomolecular interactions , which has a computational cost independent of reaction network size . The method is based on sampling a set of chemical transformation classes that characterize the interactions in a system. We apply the method to simulate multivalent ligand-receptor interaction systems . Simulation results reveal insights into ligand-receptor binding kinetics that are not available from previously developed equilibrium models .
The system-level dynamics of biomolecular interactions can be difficult to simulate using methods that require explicit specification of a chemical reaction network. Here, we present and evaluate a rejection-free stochastic simulation method for determining the kinetics of multivalent biomolecular interactions . The method has a computational cost independent of reaction network size , and it is based on sampling a set of chemical transformation classes that characterize the interactions in a system. We apply the method to simulate interactions of an m-valent ligand with an n-valent cell-surface receptor . Simulation results show that the rejection-free method is more efficient over wide parameter ranges than a related method that relies on rejection sampling .
[ { "type": "D", "before": "specify and", "after": null, "start_char_pos": 75, "end_char_pos": 86 }, { "type": "R", "before": "involve", "after": "require", "start_char_pos": 115, "end_char_pos": 122 }, { "type": "R", "before": "a", "after": "and evaluate a rejection-free", "start_char_pos": 195, "end_char_pos": 196 }, { "type": "R", "before": ", which", "after": ". The method", "start_char_pos": 296, "end_char_pos": 303 }, { "type": "R", "before": ". The method", "after": ", and it", "start_char_pos": 366, "end_char_pos": 378 }, { "type": "R", "before": "multivalent ligand-receptor interaction systems", "after": "interactions of an m-valent ligand with an n-valent cell-surface receptor", "start_char_pos": 521, "end_char_pos": 568 }, { "type": "R", "before": "reveal insights into ligand-receptor binding kinetics that are not available from previously developed equilibrium models", "after": "show that the rejection-free method is more efficient over wide parameter ranges than a related method that relies on rejection sampling", "start_char_pos": 590, "end_char_pos": 711 } ]
[ 0, 177, 367, 488 ]
0812.4619
3
The system-level dynamics of biomolecular interactions can be difficult to simulate using methods that require explicit specification of a chemical reaction network. Here, we present and evaluate a rejection-free stochastic simulation method for determining the kinetics of multivalent biomolecular interactions . The method has a computational cost independent of reaction network size, and it is based on sampling a set of chemical transformation classes defined by formal rules that characterize the interactions in a system. We apply the method to simulate interactions of an m-valent ligand with an n-valent cell-surface receptor . Simulation results show that the rejection-free method is more efficient over wide parameter ranges than a related method that relies on rejection sampling .
The system-level dynamics of multivalent biomolecular interactions can be simulated using a rule-based kinetic Monte Carlo method in which a rejection sampling strategy is used to generate reaction events. This method becomes inefficient when simulating aggregation processes with large biomolecular complexes. Here, we present a rejection-free method for determining the kinetics of multivalent biomolecular interactions , and we apply the method to simulate simple models for ligand-receptor interactions . Simulation results show that performance of the rejection-free method is equal to or better than that of the rejection method over wide parameter ranges , and the rejection-free method is more efficient for simulating systems in which aggregation is extensive. The rejection-free method reported here should be useful for simulating a variety of systems in which multisite molecular interactions yield large molecular aggregates .
[ { "type": "A", "before": null, "after": "multivalent", "start_char_pos": 29, "end_char_pos": 29 }, { "type": "R", "before": "difficult to simulate using methods that require explicit specification of a chemical reaction network.", "after": "simulated using a rule-based kinetic Monte Carlo method in which a rejection sampling strategy is used to generate reaction events. This method becomes inefficient when simulating aggregation processes with large biomolecular complexes.", "start_char_pos": 63, "end_char_pos": 166 }, { "type": "D", "before": "and evaluate", "after": null, "start_char_pos": 184, "end_char_pos": 196 }, { "type": "D", "before": "stochastic simulation", "after": null, "start_char_pos": 214, "end_char_pos": 235 }, { "type": "R", "before": ". The method has a computational cost independent of reaction network size, and it is based on sampling a set of chemical transformation classes defined by formal rules that characterize the interactions in a system. We", "after": ", and we", "start_char_pos": 313, "end_char_pos": 532 }, { "type": "R", "before": "interactions of an m-valent ligand with an n-valent cell-surface receptor", "after": "simple models for ligand-receptor interactions", "start_char_pos": 562, "end_char_pos": 635 }, { "type": "A", "before": null, "after": "performance of", "start_char_pos": 667, "end_char_pos": 667 }, { "type": "R", "before": "more efficient", "after": "equal to or better than that of the rejection method", "start_char_pos": 697, "end_char_pos": 711 }, { "type": "R", "before": "than a related method that relies on rejection sampling", "after": ", and the rejection-free method is more efficient for simulating systems in which aggregation is extensive. The rejection-free method reported here should be useful for simulating a variety of systems in which multisite molecular interactions yield large molecular aggregates", "start_char_pos": 739, "end_char_pos": 794 } ]
[ 0, 166, 314, 529 ]
0812.4692
1
RNA polymerase (RNAP) is like a mobile molecular workshop that polymerizes a RNA molecule by adding monomeric subunits one by one, while moving step by step on the DNA template itself. Here we develop a theoretical model by incorporating their steric interactions and mechanochemical cycles which explicitly captures the cyclical shape changes of each motor. Using this model, we explain not only the dependence of the average velocity of a RNAP on the externally applied load force, but also predict a {\it nonmotonic} variation of the average velocity on external torque. We also show the effect of steric interactions of the motors on the total rate of RNA synthesis. In principle, our predictions can be tested by carrying out {\it in-vitro} experiments .
RNA polymerase (RNAP) is a mobile molecular workshop that polymerizes a RNA molecule by adding monomeric subunits one by one, while moving step by step on the DNA template itself. Here we develop a theoretical model by incorporating the steric interactions of the RNAPs and their mechanochemical cycles which explicitly captures the cyclical shape changes of each motor. Using this model, we explain not only the dependence of the average velocity of a RNAP on the externally applied load force, but also predict a {\it nonmotonic} variation of the average velocity on external torque. We also show the effect of steric interactions of the motors on the total rate of RNA synthesis. In principle, our predictions can be tested by carrying out {\it in-vitro} experiments which we suggest here .
[ { "type": "D", "before": "like", "after": null, "start_char_pos": 25, "end_char_pos": 29 }, { "type": "R", "before": "their steric interactions and", "after": "the steric interactions of the RNAPs and their", "start_char_pos": 238, "end_char_pos": 267 }, { "type": "A", "before": null, "after": "which we suggest here", "start_char_pos": 758, "end_char_pos": 758 } ]
[ 0, 184, 358, 573, 670 ]
0812.4978
1
In this paper we investigate the problem of optimal dividend distribution in the presence of regime shifts. We consider a company whose cumulative net revenues evolve as a drifted Brownian motion modulated by a finite state Markov chain, and model the discount rate as a deterministic function of the current state of the chain. The objective is to maximize the expected cumulative discounted dividend payments until the moment of bankruptcy, which occurs the first time that the cash reserves (the cumulative net revenues minus cumulative dividend payments) hit zero. We show that, if the drift is positive in each regime , it is optimal to adopt a barrier strategy at certain positive regime-dependent levels, and explicitly characterize the value function as the fixed point of a contraction. In the case that the drift is small and negative in some regime , the optimal strategy takes a different form, which we explicitly identify in the case that there are two regimes. We also provide a numerical illustration of the sensitivities of the optimal barriers and the influence of regime-switching.
We investigate the problem of optimal dividend distribution for a company in the presence of regime shifts. We consider a company whose cumulative net revenues evolve as a Brownian motion with positive drift that is modulated by a finite state Markov chain, and model the discount rate as a deterministic function of the current state of the chain. In this setting the objective of the company is to maximize the expected cumulative discounted dividend payments until the moment of bankruptcy, which is taken to be the first time that the cash reserves (the cumulative net revenues minus cumulative dividend payments) are zero. We show that, if the drift is positive in each state , it is optimal to adopt a barrier strategy at certain positive regime-dependent levels, and provide an explicit characterization of the value function as the fixed point of a contraction. In the case that the drift is small and negative in one state , the optimal strategy takes a different form, which we explicitly identify if there are two regimes. We also provide a numerical illustration of the sensitivities of the optimal barriers and the influence of regime-switching.
[ { "type": "R", "before": "In this paper we", "after": "We", "start_char_pos": 0, "end_char_pos": 16 }, { "type": "A", "before": null, "after": "for a company", "start_char_pos": 74, "end_char_pos": 74 }, { "type": "R", "before": "drifted Brownian motion", "after": "Brownian motion with positive drift that is", "start_char_pos": 173, "end_char_pos": 196 }, { "type": "R", "before": "The objective", "after": "In this setting the objective of the company", "start_char_pos": 330, "end_char_pos": 343 }, { "type": "R", "before": "occurs", "after": "is taken to be", "start_char_pos": 450, "end_char_pos": 456 }, { "type": "R", "before": "hit", "after": "are", "start_char_pos": 560, "end_char_pos": 563 }, { "type": "R", "before": "regime", "after": "state", "start_char_pos": 617, "end_char_pos": 623 }, { "type": "R", "before": "explicitly characterize", "after": "provide an explicit characterization of", "start_char_pos": 717, "end_char_pos": 740 }, { "type": "R", "before": "some regime", "after": "one state", "start_char_pos": 849, "end_char_pos": 860 }, { "type": "R", "before": "in the case that", "after": "if", "start_char_pos": 937, "end_char_pos": 953 } ]
[ 0, 108, 329, 569, 796, 976 ]
0812.5087
1
Stochastic networks are a plausible representation of the relational information among entities in dynamic systems such as living cells or social communities. While there is a rich literature in estimating a static or temporally invariant network from observation data, little has been done towards estimating time-varying networks from time series of entity attributes. In this paper , we present two new machine learning methods for estimating time-varying networks, which both build on a temporally smoothed l_1-regularized logistic regression formalism that can be cast as standard convex-optimization problem and solved efficiently using generic solvers scalable to large networks. We report promising results on recovering simulated time-varying networks. For real datasets , we reverse engineer the latent sequence of temporally rewiring political network between Senators from the US senate voting records and the latent evolving gene network which contains more than 4000 genes from the life cycle of Drosophila melanogaster from microarray time course.
Stochastic networks are a plausible representation of the relational information among entities in dynamic systems such as living cells or social communities. While there is a rich literature in estimating a static or temporally invariant network from observation data, little has been done toward estimating time-varying networks from time series of entity attributes. In this paper we present two new machine learning methods for estimating time-varying networks, which both build on a temporally smoothed l_1-regularized logistic regression formalism that can be cast as a standard convex-optimization problem and solved efficiently using generic solvers scalable to large networks. We report promising results on recovering simulated time-varying networks. For real data sets , we reverse engineer the latent sequence of temporally rewiring political networks between Senators from the US Senate voting records and the latent evolving regulatory networks underlying 588 genes across the life cycle of Drosophila melanogaster from the microarray time course.
[ { "type": "R", "before": "towards", "after": "toward", "start_char_pos": 291, "end_char_pos": 298 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 385, "end_char_pos": 386 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 577, "end_char_pos": 577 }, { "type": "R", "before": "datasets", "after": "data sets", "start_char_pos": 772, "end_char_pos": 780 }, { "type": "R", "before": "network", "after": "networks", "start_char_pos": 856, "end_char_pos": 863 }, { "type": "R", "before": "senate", "after": "Senate", "start_char_pos": 893, "end_char_pos": 899 }, { "type": "R", "before": "gene network which contains more than 4000 genes from", "after": "regulatory networks underlying 588 genes across", "start_char_pos": 939, "end_char_pos": 992 }, { "type": "D", "before": "Drosophila melanogaster", "after": null, "start_char_pos": 1011, "end_char_pos": 1034 }, { "type": "R", "before": "from", "after": "Drosophila melanogaster from the", "start_char_pos": 1035, "end_char_pos": 1039 } ]
[ 0, 158, 370, 687, 762 ]
0901.0135
1
In a dynamic social or biological environment, the interactions between the underlying actors can undergo large and systematic changes. The latent roles or membership of the actors as determined by these dynamic links will also exhibit rich temporal phenomena, assuming a distinct role at one point while leaning more towards a second role at an another point. To capture this dynamic mixed membership in rewiring networks, we propose a state space mixed membership stochastic blockmodel which embeds an actor into a latent space and track its mixed membership in the latent space across time . We derived efficient approximate learning and inference algorithms for our model, and applied the learned models to analyze a social network between monks , and a rewiring gene interaction network of Drosophila melanogaster collected during its full life cycle. In both cases, our model reveals interesting patterns of the dynamic roles of the actors.
In a dynamic social or biological environment, the interactions between the actors can undergo large and systematic changes. In this paper we propose a model-based approach to analyze what we will refer to as the dynamic tomography of such time-evolving networks. Our approach offers an intuitive but powerful tool to infer the semantic underpinnings of each actor, such as its social roles or biological functions, underlying the observed network topologies. Our model builds on earlier work on a mixed membership stochastic blockmodel for static networks, and the state-space model for tracking object trajectory. It overcomes a major limitation of many current network inference techniques, which assume that each actor plays a unique and invariant role that accounts for all its interactions with other actors; instead, our method models the role of each actor as a time-evolving mixed membership vector that allows actors to behave differently over time and carry out different roles/functions when interacting with different peers, which is closer to reality. We present an efficient algorithm for approximate inference and learning using our model; and we applied our model to analyze a social network between monks (i.e., the Sampson's network), a dynamic email communication network between the Enron employees, and a rewiring gene interaction network of fruit fly collected during its full life cycle. In all cases, our model reveals interesting patterns of the dynamic roles of the actors.
[ { "type": "D", "before": "underlying", "after": null, "start_char_pos": 76, "end_char_pos": 86 }, { "type": "R", "before": "The latent roles or membership of the actors as determined by these dynamic links will also exhibit rich temporal phenomena, assuming a distinct role at one point while leaning more towards a second role at an another point. To capture this dynamic mixed membership in rewiring networks, we propose a state space", "after": "In this paper we propose a model-based approach to analyze what we will refer to as the dynamic tomography of such time-evolving networks. Our approach offers an intuitive but powerful tool to infer the semantic underpinnings of each actor, such as its social roles or biological functions, underlying the observed network topologies. Our model builds on earlier work on a", "start_char_pos": 136, "end_char_pos": 448 }, { "type": "R", "before": "which embeds an actor into a latent space and track its mixed membership in the latent space across time . We derived efficient approximate learning and inference algorithms for our model, and applied the learned models", "after": "for static networks, and the state-space model for tracking object trajectory. It overcomes a major limitation of many current network inference techniques, which assume that each actor plays a unique and invariant role that accounts for all its interactions with other actors; instead, our method models the role of each actor as a time-evolving mixed membership vector that allows actors to behave differently over time and carry out different roles/functions when interacting with different peers, which is closer to reality. 
We present an efficient algorithm for approximate inference and learning using our model; and we applied our model", "start_char_pos": 488, "end_char_pos": 707 }, { "type": "R", "before": ",", "after": "(i.e., the Sampson's network), a dynamic email communication network between the Enron employees,", "start_char_pos": 750, "end_char_pos": 751 }, { "type": "R", "before": "Drosophila melanogaster", "after": "fruit fly", "start_char_pos": 795, "end_char_pos": 818 }, { "type": "R", "before": "both", "after": "all", "start_char_pos": 860, "end_char_pos": 864 } ]
[ 0, 135, 360, 594, 856 ]
0901.0287
1
Motivation: Recent advances in experimental techniques have generated large amounts of protein interaction data, producing networks containing large numbers of cellular proteins. Mathematically sound and robust foundations are needed for extensive, context-specific exploration of networks, integrating knowledge from different specializations and facilitating biological discovery. Results: Extending our earlier work, we present a theoretical construct, based on random walks, for modelling of information channels between selected points in interaction networks. The software implementation, called ITM Probe, can be used as network exploration and hypothesis forming tool. Through examples involving the yeast pheromone response pathway, we illustrate the versatility and stability of ITM Probe.Availability: www.ncbi.nlm.nih.gov/CBBresearch/qmbp/itm_probe
In our previous publication, a framework for information flow in interaction networks based on random walks with damping was formulated with two fundamental modes: emitting and absorbing. While many other network analysis methods based on random walks or equivalent notions have been developed before and after our earlier work, one can show that they can all be mapped to one of the two modes. In addition to these two fundamental modes, a major strength of our earlier formalism was its accommodation of context-specific directed information flow that yielded plausible and meaningful biological interpretation of protein functions and pathways. However, the directed flow from origins to destinations was induced via a potential function that was heuristic. Here, with a theoretically sound approach called the channel mode, we extend our earlier work for directed information flow. This is achieved by our newly constructed nonheuristic potential function that facilitates a purely probabilistic interpretation of the channel mode. For each network node, the channel mode combines the solutions of emitting and absorbing modes in the same context, producing what we call a channel tensor. The entries of the channel tensor at each node can be interpreted as the amount of flow passing through that node from an origin to a destination. Similarly to our earlier model, the channel mode encompasses damping as a free parameter that controls the locality of information flow. Through examples involving the yeast pheromone response pathway, we illustrate the versatility and stability of our new framework.
[ { "type": "R", "before": "Motivation: Recent advances in experimental techniques have generated large amounts of protein interaction data, producing networks containing large numbers of cellular proteins. Mathematically sound and robust foundations are needed for extensive,", "after": "In our previous publication, a framework for information flow in interaction networks based on random walks with damping was formulated with two fundamental modes: emitting and absorbing. While many other network analysis methods based on random walks or equivalent notions have been developed before and after our earlier work, one can show that they can all be mapped to one of the two modes. In addition to these two fundamental modes, a major strength of our earlier formalism was its accommodation of", "start_char_pos": 0, "end_char_pos": 248 }, { "type": "R", "before": "exploration of networks, integrating knowledge from different specializations and facilitating biological discovery. Results: Extending our earlier work, we present a theoretical construct, based on random walks, for modelling of information channels between selected points in interaction networks. The software implementation, called ITM Probe, can be used as network exploration and hypothesis forming tool.", "after": "directed information flow that yielded plausible and meaningful biological interpretation of protein functions and pathways. However, the directed flow from origins to destinations was induced via a potential function that was heuristic. Here, with a theoretically sound approach called the channel mode, we extend our earlier work for directed information flow. This is achieved by our newly constructed nonheuristic potential function that facilitates a purely probabilistic interpretation of the channel mode. For each network node, the channel mode combines the solutions of emitting and absorbing modes in the same context, producing what we call a channel tensor. 
The entries of the channel tensor at each node can be interpreted as the amount of flow passing through that node from an origin to a destination. Similarly to our earlier model, the channel mode encompasses damping as a free parameter that controls the locality of information flow.", "start_char_pos": 266, "end_char_pos": 676 }, { "type": "R", "before": "ITM Probe.Availability: www.ncbi.nlm.nih.gov/CBBresearch/qmbp/itm_probe", "after": "our new framework.", "start_char_pos": 789, "end_char_pos": 860 } ]
[ 0, 178, 382, 565, 676, 799 ]
0901.0638
1
This article presents differential equations and solution methods for the functions of the form A(z) = F^{-1}(G(z)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of distributional risk may be assessed free of the Monte Carlo noise associated with resampling. The method developed here may also be regarded as providing analytical and numerical bases for doing a more precise form of Cornish-Fisher expansion . Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic and variance gamma .
This article Revised working paper V 1.1 presents differential equations and solution methods for the functions of the form A(z) = F^{-1}(G(z)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of distributional risk may be assessed free of the Monte Carlo noise associated with resampling. The method may also be regarded as providing both analytical and numerical bases for doing more precise Cornish-Fisher transformations . Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic , variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement may be of use in optimal GPU computations .
[ { "type": "A", "before": null, "after": "Revised working paper V 1.1", "start_char_pos": 13, "end_char_pos": 13 }, { "type": "D", "before": "developed here", "after": null, "start_char_pos": 579, "end_char_pos": 593 }, { "type": "A", "before": null, "after": "both", "start_char_pos": 628, "end_char_pos": 628 }, { "type": "R", "before": "a more precise form of", "after": "more precise", "start_char_pos": 670, "end_char_pos": 692 }, { "type": "R", "before": "expansion", "after": "transformations", "start_char_pos": 708, "end_char_pos": 717 }, { "type": "R", "before": "and variance gamma", "after": ", variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement may be of use in optimal GPU computations", "start_char_pos": 837, "end_char_pos": 855 } ]
[ 0, 170, 272, 440, 567 ]
0901.0638
2
This article Revised working paper V 1.1 presents differential equations and solution methods for the functions of the form A(z) = F^{-1}(G(z)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of distributional risk may be assessed free of the Monte Carlo noise associated with resampling. The method may also be regarded as providing both analytical and numerical bases for doing more precise Cornish-Fisher transformations. Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic, variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement may be of use in optimal GPU computations .
This article presents differential equations and solution methods for the functions of the form A(z) = F^{-1}(G(z)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of distributional risk may be assessed free of the Monte Carlo noise associated with resampling. The method may also be regarded as providing both analytical and numerical bases for doing more precise Cornish-Fisher transformations. Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic, variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement is of use in optimal GPU computations , and we give example of branch-free normal quantiles that offer performance improvements in a GPU environment, while retaining the precision characteristics of well-known methods .
[ { "type": "D", "before": "Revised working paper V 1.1", "after": null, "start_char_pos": 13, "end_char_pos": 40 }, { "type": "R", "before": "may be", "after": "is", "start_char_pos": 1126, "end_char_pos": 1132 }, { "type": "A", "before": null, "after": ", and we give example of branch-free normal quantiles that offer performance improvements in a GPU environment, while retaining the precision characteristics of well-known methods", "start_char_pos": 1168, "end_char_pos": 1168 } ]
[ 0, 197, 299, 467, 594, 730, 875, 1084 ]
0901.0638
3
This article presents differential equations and solution methods for the functions of the form A(z) = F^{-1}(G(z)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of distributional risk may be assessed free of the Monte Carlo noise associated with resampling. The method may also be regarded as providing both analytical and numerical bases for doing more precise Cornish-Fisher transformations. Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic, variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement is of use in optimal GPU computations, and we give example of branch-free normal quantiles that offer performance improvements in a GPU environment, while retaining the precision characteristics of well-known methods .
This article presents differential equations and solution methods for the functions of the form A(z) = F^{-1}(G(z)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of Monte Carlo samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of distributional risk may be assessed free of the Monte Carlo noise associated with resampling. The method may also be regarded as providing both analytical and numerical bases for doing more precise Cornish-Fisher transformations. Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic, variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement is of use in optimal GPU computations, and we give example of branch-free normal quantiles that offer performance improvements in a GPU environment, while retaining the precision characteristics of well-known methods . Comparisons are made on Nvidia Quadro and GTX 285 and 480 GPU cards .
[ { "type": "A", "before": null, "after": "Monte Carlo", "start_char_pos": 215, "end_char_pos": 215 }, { "type": "A", "before": null, "after": ". Comparisons are made on Nvidia Quadro and GTX 285 and 480 GPU cards", "start_char_pos": 1316, "end_char_pos": 1316 } ]
[ 0, 169, 272, 440, 567, 703, 848, 1057 ]
0901.0638
4
This article presents differential equations and solution methods for the functions of the form A(z ) = F^{-1}(G( z )), where F and G are cumulative distribution functions. Such functions allow the direct recycling of Monte Carlo samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of distributional risk may be assessed free of the Monte Carlo noise associated with resampling. The method may also be regarded as providing both analytical and numerical bases for doing more precise Cornish-Fisher transformations. Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic, variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement is of use in optimal GPU computations {\it , and we give example of branch-free normal quantiles that offer performance improvements in a GPU environment, while retaining the precision characteristics of well-known methods. Comparisons are made on Nvidia Quadro and GTX 285 and 480 GPU cards .
This article presents differential equations and solution methods for the functions of the form Q(x ) = F^{-1}(G( x )), where F and G are cumulative distribution functions. Such functions allow the direct recycling of Monte Carlo samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminate the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of distributional risk may be assessed free of the Monte Carlo noise associated with resampling. Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic, variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement is of use in optimal GPU computations as it avoids the effect of{\it warp divergence , and we give examples of branch-free normal quantiles that offer performance improvements in a GPU environment, while retaining the best precision characteristics of well-known methods. We also offer models based on a low-probability of warp divergence. Comparisons of new and old forms are made on the Nvidia Quadro 4000, GTX 285 and 480 , and Tesla C2050 GPUs. We argue that in single-precision mode, the change-of-variables approach offers performance competitive with the fastest existing scheme while substantially improving precision, and that in double-precision mode, this approach offers the most GPU-optimal Gaussian quantile yet, and without compromise on precision for Monte Carlo applications, working twice as fast as the CUDA 4 library function with increased precision .
[ { "type": "R", "before": "A(z", "after": "Q(x", "start_char_pos": 96, "end_char_pos": 99 }, { "type": "R", "before": "z", "after": "x", "start_char_pos": 114, "end_char_pos": 115 }, { "type": "D", "before": "The method may also be regarded as providing both analytical and numerical bases for doing more precise Cornish-Fisher transformations.", "after": null, "start_char_pos": 582, "end_char_pos": 717 }, { "type": "A", "before": null, "after": "as it avoids the effect of", "start_char_pos": 1151, "end_char_pos": 1151 }, { "type": "A", "before": null, "after": "warp divergence", "start_char_pos": 1156, "end_char_pos": 1156 }, { "type": "R", "before": "example", "after": "examples", "start_char_pos": 1171, "end_char_pos": 1178 }, { "type": "A", "before": null, "after": "best", "start_char_pos": 1289, "end_char_pos": 1289 }, { "type": "R", "before": "Comparisons", "after": "We also offer models based on a low-probability of warp divergence. Comparisons of new and old forms", "start_char_pos": 1339, "end_char_pos": 1350 }, { "type": "R", "before": "Nvidia Quadro and", "after": "the Nvidia Quadro 4000,", "start_char_pos": 1363, "end_char_pos": 1380 }, { "type": "R", "before": "GPU cards", "after": ", and Tesla C2050 GPUs. We argue that in single-precision mode, the change-of-variables approach offers performance competitive with the fastest existing scheme while substantially improving precision, and that in double-precision mode, this approach offers the most GPU-optimal Gaussian quantile yet, and without compromise on precision for Monte Carlo applications, working twice as fast as the CUDA 4 library function with increased precision", "start_char_pos": 1397, "end_char_pos": 1406 } ]
[ 0, 172, 286, 454, 581, 717, 862, 1071, 1338 ]
0901.2080
1
The Dybvig-Ingersoll-Ross (DIR) theorem states that, in arbitrage-free term structure models, long-term yields and forward rates can never fall. We present a unifying approach with a refined version of the DIR theorem, where we identify the reciprocal of the maturity date as the maximal order that long-term rates at earlier dates can dominate long-term rates at later dates. The viability assumption imposed on the market model is significantly weaker than those appearing previously in the literature.
The Dybvig-Ingersoll-Ross (DIR) theorem states that, in arbitrage-free term structure models, long-term yields and forward rates can never fall. We present a refined version of the DIR theorem, where we identify the reciprocal of the maturity date as the maximal order that long-term rates at earlier dates can dominate long-term rates at later dates. The viability assumption imposed on the market model is weaker than those appearing previously in the literature.
[ { "type": "D", "before": "unifying approach with a", "after": null, "start_char_pos": 158, "end_char_pos": 182 }, { "type": "D", "before": "significantly", "after": null, "start_char_pos": 433, "end_char_pos": 446 } ]
[ 0, 144, 376 ]
0901.2377
1
Credit relationships between commercial banks and quoted firms are studied for the structure and its temporal change from the year 1980 to 2005. At each year, the credit network is regarded as a weighted bipartite graph where edges correspond to the relationships and weights refer to the amounts of loans. Reduction in the supply of credit affects firms as debtor, and failure of a firm influences banks as creditor. To quantify the dependency and influence between banks and firms, we propose to define a set of scores of banks and firms, which can be calculated by solving an eigenvalue problem determined by the weight of the credit network. We found that a few largest eigenvalues and corresponding eigenvectors are significant by using a null hypothesis of random bipartite graphs, and that the scores can quantitatively describe the stability or fragility of the credit network during the 25 years.
We present a new approach to understanding credit relationships between commercial banks and quoted firms and with this approach examine the temporal change in the structure of the Japanese credit network from 1980 to 2005. At each year, the credit network is regarded as a weighted bipartite graph where edges correspond to the relationships and weights refer to the amounts of loans. Reduction in the supply of credit affects firms as debtor, and failure of a firm influences banks as creditor. To quantify the dependency and influence between banks and firms, we propose a set of scores of banks and firms, which can be calculated by solving an eigenvalue problem determined by the weight of the credit network. We found that a few largest eigenvalues and corresponding eigenvectors are significant by using a null hypothesis of random bipartite graphs, and that the scores can quantitatively describe the stability or fragility of the credit network during the 25 years.
[ { "type": "R", "before": "Credit", "after": "We present a new approach to understanding credit", "start_char_pos": 0, "end_char_pos": 6 }, { "type": "R", "before": "are studied for the structure and its temporal change from the year", "after": "and with this approach examine the temporal change in the structure of the Japanese credit network from", "start_char_pos": 63, "end_char_pos": 130 }, { "type": "D", "before": "to define", "after": null, "start_char_pos": 495, "end_char_pos": 504 } ]
[ 0, 144, 306, 417, 645 ]
0901.2377
2
We present a new approach to understanding credit relationships between commercial banks and quoted firms and with this approach examine the temporal change in the structure of the Japanese credit network from 1980 to 2005. At each year, the credit network is regarded as a weighted bipartite graph where edges correspond to the relationships and weights refer to the amounts of loans. Reduction in the supply of credit affects firms as debtor, and failure of a firm influences banks as creditor. To quantify the dependency and influence between banks and firms, we propose a set of scores of banks and firms, which can be calculated by solving an eigenvalue problem determined by the weight of the credit network. We found that a few largest eigenvalues and corresponding eigenvectors are significant by using a null hypothesis of random bipartite graphs, and that the scores can quantitatively describe the stability or fragility of the credit network during the 25 years.
We present a new approach to understanding credit relationships between commercial banks and quoted firms , and with this approach , examine the temporal change in the structure of the Japanese credit network from 1980 to 2005. At each year, the credit network is regarded as a weighted bipartite graph where edges correspond to the relationships and weights refer to the amounts of loans. Reduction in the supply of credit affects firms as debtor, and failure of a firm influences banks as creditor. To quantify the dependency and influence between banks and firms, we propose a set of scores of banks and firms, which can be calculated by solving an eigenvalue problem determined by the weight of the credit network. We found that a few largest eigenvalues and corresponding eigenvectors are significant by using a null hypothesis of random bipartite graphs, and that the scores can quantitatively describe the stability or fragility of the credit network during the 25 years.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 106, "end_char_pos": 106 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 130, "end_char_pos": 130 } ]
[ 0, 225, 387, 498, 716 ]
0901.3003
1
We formalize a cumulative interest compliant conservation requirement for pure financial products proposed by Wesseling and van den Bergh to make financial issues relating to these products amenable to mathematical analysis. The formalization is given in a timed extension of tuplix calculus and abstracts from the idiosyncrasies of time measurement . We also use the timed tuplix calculus to show how wanted financial behaviours may profit from using pure financial products .
We develop an algebraic framework for the description and analysis of financial behaviours, that is, behaviours that consist of transferring certain amounts of money at planned times. To a large extent, analysis of financial products amounts to analysis of such behaviours. We formalize the cumulative interest compliant conservation requirement for financial products proposed by Wesseling and van den Bergh by an equation in the framework developed and define a notion of financial product behaviour using this formalization . We also present some properties of financial product behaviours .
[ { "type": "R", "before": "formalize a", "after": "develop an algebraic framework for the description and analysis of financial behaviours, that is, behaviours that consist of transferring certain amounts of money at planned times. To a large extent, analysis of financial products amounts to analysis of such behaviours. We formalize the", "start_char_pos": 3, "end_char_pos": 14 }, { "type": "D", "before": "pure", "after": null, "start_char_pos": 74, "end_char_pos": 78 }, { "type": "R", "before": "to make financial issues relating to these products amenable to mathematical analysis. The formalization is given in a timed extension of tuplix calculus and abstracts from the idiosyncrasies of time measurement", "after": "by an equation in the framework developed and define a notion of financial product behaviour using this formalization", "start_char_pos": 138, "end_char_pos": 349 }, { "type": "R", "before": "use the timed tuplix calculus to show how wanted financial behaviours may profit from using pure financial products", "after": "present some properties of financial product behaviours", "start_char_pos": 360, "end_char_pos": 475 } ]
[ 0, 224, 351 ]
0901.4785
1
Inspired by problems in biochemical kinetics, we study statistical properties of an overdamped Langevin processes with the friction coefficient depending on the state of a similar, unobserved, process. Integrating out the latter, we derive the Fokker-Planck equation for the probability distribution of the former. This has the form of diffusion equation with time-dependent diffusion coefficient, resulting in an anomalous diffusion. The diffusion exponent can not be predicted using a simple scaling argument, and anomalous scaling appears as well . The diffusion exponent of the Weiss-Havlin comb model is derived as a special case, and the same exponent holds even for weakly coupled processes . We compare our theoretical predictions with numerical simulations and find an excellent agreement. The findings caution against treating biochemical systems with unobserved dynamical degrees of freedom by means of standard, diffusive Langevin description.
Inspired by problems in biochemical kinetics, we study statistical properties of an overdamped Langevin processes with the friction coefficient depending on the state of a similar, unobserved, process. Integrating out the latter, we derive the long time behavior of the mean square displacement. Anomalous diffusion is found where the diffusion exponent can not be predicted using a simple scaling argument, and anomalous scaling appears as well . We compare our theoretical predictions with numerical simulations and find an excellent agreement. The findings caution against treating biochemical systems with unobserved dynamical degrees of freedom by means of standard, diffusive Langevin description.
[ { "type": "R", "before": "Fokker-Planck equation for the probability distribution of the former. This has the form of diffusion equation with time-dependent diffusion coefficient, resulting in an anomalous diffusion. The diffusion", "after": "long time behavior of the mean square displacement. Anomalous diffusion is found where the diffusion", "start_char_pos": 244, "end_char_pos": 448 }, { "type": "D", "before": ". The diffusion exponent of the Weiss-Havlin comb model is derived as a special case, and the same exponent holds even for weakly coupled processes", "after": null, "start_char_pos": 550, "end_char_pos": 697 } ]
[ 0, 201, 314, 434, 551, 699, 798 ]
0901.4785
2
Inspired by problems in biochemical kinetics, we study statistical properties of an overdamped Langevin processes with the friction coefficient depending on the state of a similar, unobserved , process. Integrating out the latter, we derive the long time behavior of the mean square displacement. Anomalous diffusion is found where the diffusion exponent can not be predicted using a simple scaling argument, and anomalous scaling appears as well. We compare our theoretical predictions with numerical simulations and find an excellent agreement. The findings caution against treating biochemical systems with unobserved dynamical degrees of freedom by means of standard, diffusive Langevin description .
Inspired by problems in biochemical kinetics, we study statistical properties of an overdamped Langevin process whose friction coefficient depends on the state of a similar, unobserved process. Integrating out the latter, we derive the long time behaviour of the mean square displacement. Anomalous diffusion is found . Since the diffusion exponent can not be predicted using a simple scaling argument, anomalous scaling appears as well. We also find that the coupling can lead to ergodic or non-ergodic behaviour of the studied process. We compare our theoretical predictions with numerical simulations and find an excellent agreement. The findings caution against treating biochemical systems coupled with unobserved dynamical degrees of freedom by means of standard, diffusive Langevin descriptions .
[ { "type": "R", "before": "processes with the friction coefficient depending", "after": "process whose friction coefficient depends", "start_char_pos": 104, "end_char_pos": 153 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 192, "end_char_pos": 193 }, { "type": "R", "before": "behavior", "after": "behaviour", "start_char_pos": 255, "end_char_pos": 263 }, { "type": "R", "before": "where", "after": ". Since", "start_char_pos": 326, "end_char_pos": 331 }, { "type": "D", "before": "and", "after": null, "start_char_pos": 409, "end_char_pos": 412 }, { "type": "A", "before": null, "after": "also find that the coupling can lead to ergodic or non-ergodic behaviour of the studied process. We", "start_char_pos": 451, "end_char_pos": 451 }, { "type": "A", "before": null, "after": "coupled", "start_char_pos": 606, "end_char_pos": 606 }, { "type": "R", "before": "description", "after": "descriptions", "start_char_pos": 693, "end_char_pos": 704 } ]
[ 0, 202, 296, 447, 547 ]
0901.4904
1
A nonlinear model has been posited for the global analysis of data pertaining to the semantic network of a complex operating system (free and open-source software). While the distribution of links in the dependency network of this system is scale-free for the intermediate nodes, the richest nodes deviate from this trend, and exhibit a nonlinearity-induced saturation effect. This also distinguishes the two directed networks of incoming and outgoing links from each other. The initial condition for a dynamic model, evolving towards the steady dependency distribution , determines the saturation properties of the mature scale-free network .
A continuum model has been posited for the global analysis of data pertaining to the semantic network of a complex operating system (free and open-source software). While the frequency distributions of links in both the in-directed and out-directed dependency networks of this system follow Zipf's law for the intermediate nodes, the richest nodes , as well as the weakest nodes, deviate from this trend, and exhibit a saturation behaviour arising from the finiteness of semantic possibilities in the network. To preserve uniqueness of operations in the network, the nodes obey an "exclusion principle", with no two nodes being exactly alike in their functionality. The parameters related to finite-size behaviour make a quantitative distinction between the two directed networks of incoming and outgoing links . Dynamic evolution, over two generations of free software releases, shows that the saturation properties of the in-directed and out-directed networks are oppositely affected. For the out-degree distribution, whose top nodes form the foundation of the entire network, the initial condition for a dynamic model, evolving towards a steady scale-free frequency distribution of nodes , determines the finite limit to the number of top nodes that the mature out-directed network can have .
[ { "type": "R", "before": "nonlinear", "after": "continuum", "start_char_pos": 2, "end_char_pos": 11 }, { "type": "R", "before": "distribution", "after": "frequency distributions", "start_char_pos": 175, "end_char_pos": 187 }, { "type": "R", "before": "the dependency network", "after": "both the in-directed and out-directed dependency networks", "start_char_pos": 200, "end_char_pos": 222 }, { "type": "R", "before": "is scale-free", "after": "follow Zipf's law", "start_char_pos": 238, "end_char_pos": 251 }, { "type": "A", "before": null, "after": ", as well as the weakest nodes,", "start_char_pos": 298, "end_char_pos": 298 }, { "type": "R", "before": "nonlinearity-induced saturation effect. This also distinguishes the two", "after": "saturation behaviour arising from the finiteness of semantic possibilities in the network. To preserve uniqueness of operations in the network, the nodes obey an \"exclusion principle\", with no two nodes being exactly alike in their functionality. The parameters related to finite-size behaviour make a quantitative distinction between the two", "start_char_pos": 338, "end_char_pos": 409 }, { "type": "R", "before": "from each other. The", "after": ". Dynamic evolution, over two generations of free software releases, shows that the saturation properties of the in-directed and out-directed networks are oppositely affected. For the out-degree distribution, whose top nodes form the foundation of the entire network, the", "start_char_pos": 459, "end_char_pos": 479 }, { "type": "R", "before": "the steady dependency distribution", "after": "a steady scale-free frequency distribution of nodes", "start_char_pos": 536, "end_char_pos": 570 }, { "type": "R", "before": "saturation properties of the mature scale-free network", "after": "finite limit to the number of top nodes that the mature out-directed network can have", "start_char_pos": 588, "end_char_pos": 642 } ]
[ 0, 164, 377, 475 ]
0901.4904
3
A continuum model has been proposed to fit the data pertaining to the directed networks in free and open-source software. While the degree distributions of links in both the in-directed and out-directed dependency networks follow Zipf's law for the intermediate nodes, the most richly linked nodes , as well as the most poorly linked nodes , deviate from this trend and exhibit finite-size effects. The finite-size parameters make a quantitative distinction between the in-directed and out-directed networks. Dynamic evolution of free software releases shows that the finite-size properties of the in-directed and out-directed networks are opposite in nature. For the out-degree distribution, the initial condition for a dynamic evolution also corresponds to the limiting count of rich nodes that the mature out-directed network can have. The number of nodes contributing out-directed links grows with each passing generation of software release, but this growth ultimately saturates towards a finite value due to the finiteness of semantic possibilities in the network.
We propose a continuum model for the degree distribution of directed networks in free and open-source software. The degree distributions of links in both the in-directed and out-directed dependency networks follow Zipf's law for the intermediate nodes, but the heavily linked nodes and the poorly linked nodes deviate from this trend and exhibit finite-size effects. The finite-size parameters make a quantitative distinction between the in-directed and out-directed networks. For the out-degree distribution, the initial condition for a dynamic evolution corresponds to the limiting count of the most heavily liked nodes that the out-directed network can finally have. The number of nodes contributing out-directed links grows with every generation of software release, but this growth ultimately saturates towards a terminal value due to the finiteness of semantic possibilities in the network.
[ { "type": "R", "before": "A continuum model has been proposed to fit the data pertaining to the", "after": "We propose a continuum model for the degree distribution of", "start_char_pos": 0, "end_char_pos": 69 }, { "type": "R", "before": "While the", "after": "The", "start_char_pos": 122, "end_char_pos": 131 }, { "type": "R", "before": "the most richly linked nodes , as well as the most", "after": "but the heavily linked nodes and the", "start_char_pos": 269, "end_char_pos": 319 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 340, "end_char_pos": 341 }, { "type": "D", "before": "Dynamic evolution of free software releases shows that the finite-size properties of the in-directed and out-directed networks are opposite in nature.", "after": null, "start_char_pos": 509, "end_char_pos": 659 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 739, "end_char_pos": 743 }, { "type": "R", "before": "rich", "after": "the most heavily liked", "start_char_pos": 781, "end_char_pos": 785 }, { "type": "D", "before": "mature", "after": null, "start_char_pos": 801, "end_char_pos": 807 }, { "type": "A", "before": null, "after": "finally", "start_char_pos": 833, "end_char_pos": 833 }, { "type": "R", "before": "each passing", "after": "every", "start_char_pos": 903, "end_char_pos": 915 }, { "type": "R", "before": "finite", "after": "terminal", "start_char_pos": 995, "end_char_pos": 1001 } ]
[ 0, 121, 398, 508, 659, 839 ]
0901.4914
1
In this paper we analyse financial implications of exchangeability and similar properties of finite dimensional random vectors. We show how these properties are reflected in the prices of spread options and the put-call symmetry property in view of the well-known duality principle in option pricing. A particular attention is devoted to the case of asset prices driven by Levy processes. Based on this, concrete semi-static hedging techniques for multiasset barrier options, such as certain weighted barrier spread options, weighted barrier swap options or weighted barrier quanto-swap options are suggested.
In this paper we analyse financial implications of exchangeability and similar properties of finite dimensional random vectors. We show how these properties are reflected in prices of some basket options in view of the well-known put-call symmetry property and the duality principle in option pricing. A particular attention is devoted to the case of asset prices driven by Levy processes. Based on this, concrete semi-static hedging techniques for multi-asset barrier options, such as certain weighted barrier spread options, weighted barrier swap options or weighted barrier quanto-swap options are suggested.
[ { "type": "R", "before": "the prices of spread options and the put-call symmetry property", "after": "prices of some basket options", "start_char_pos": 174, "end_char_pos": 237 }, { "type": "A", "before": null, "after": "put-call symmetry property and the", "start_char_pos": 264, "end_char_pos": 264 }, { "type": "R", "before": "multiasset", "after": "multi-asset", "start_char_pos": 449, "end_char_pos": 459 } ]
[ 0, 127, 301, 389 ]
0902.0504
1
We consider a simple market where a vendor offers multiple variants of a certain product . Preferences of the vendor and of potential buyers are heterogeneous and often antagonistic. Optimization of the joint benefit of the vendor and the buyers turns the toy market into a combinatorial matching problem. We compare optimal solutions found with and without a matchmaker, examine the resulting inequality between the market participants, and study the influence of correlations .
We consider a simple market where a vendor offers multiple variants of a certain product and preferences of both the vendor and potential buyers are heterogeneous and possibly even antagonistic. Optimization of the joint benefit of the vendor and the buyers turns the toy market into a combinatorial matching problem. We compare the optimal solutions found with and without a matchmaker, examine the resulting inequality between the market participants, and study the impact of correlations on the system .
[ { "type": "R", "before": ". Preferences of", "after": "and preferences of both", "start_char_pos": 89, "end_char_pos": 105 }, { "type": "D", "before": "of", "after": null, "start_char_pos": 121, "end_char_pos": 123 }, { "type": "R", "before": "often", "after": "possibly even", "start_char_pos": 163, "end_char_pos": 168 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 317, "end_char_pos": 317 }, { "type": "R", "before": "influence of correlations", "after": "impact of correlations on the system", "start_char_pos": 453, "end_char_pos": 478 } ]
[ 0, 182, 305 ]
0902.0782
1
Wireless ad hoc networks are seldom characterized by one single performance metric, yet the current literature lacks a flexible framework to assist in characterizing the design tradeoffs in such networks. In this work, we address this problem by proposing a new modeling framework for routing in ad hoc networks, which used in conjunction with metaheuristic multiobjective search algorithms, will result in a better understanding of network behavior and performance when multiple criteria are relevant. Our approach is to take a holistic view of network management and control that captures the cross-interactions among interference management techniques implemented at various layers of the protocol stack. We present the Pareto optimal sets for an example sensor network when delay, robustness and energy are considered as performance criteria for the network .
Wireless ad hoc networks are seldom characterized by one single performance metric, yet the current literature lacks a flexible framework to assist in characterizing the design tradeoffs in such networks. In this work, we address this problem by proposing a new modeling framework for routing in ad hoc networks, which used in conjunction with metaheuristic multiobjective search algorithms, will result in a better understanding of network behavior and performance when multiple criteria are relevant. Our approach is to take a holistic view of the network that captures the cross-interactions among interference management techniques implemented at various layers of the protocol stack. The resulting framework is a complex multiobjective optimization problem that can be efficiently solved through existing multiobjective search techniques. In this contribution, we present the Pareto optimal sets for an example sensor network when delay, robustness and energy are considered . The aim of this paper is to present the framework and hence for conciseness purposes, the multiobjective optimization search is not developed herein .
[ { "type": "R", "before": "network management and control", "after": "the network", "start_char_pos": 546, "end_char_pos": 576 }, { "type": "R", "before": "We", "after": "The resulting framework is a complex multiobjective optimization problem that can be efficiently solved through existing multiobjective search techniques. In this contribution, we", "start_char_pos": 708, "end_char_pos": 710 }, { "type": "R", "before": "as performance criteria for the network", "after": ". The aim of this paper is to present the framework and hence for conciseness purposes, the multiobjective optimization search is not developed herein", "start_char_pos": 822, "end_char_pos": 861 } ]
[ 0, 204, 502, 707 ]
0902.0878
1
We present a methodology to extract the backbone of complex networks in which the weight and direction of links, as well as non-topological state variables associated with nodesplay a crucial role. This methodology can be applied in general to networks in which mass or energy is flowing along the links. In this paper, we show how the procedure enables us to address important questions in economics, namely how control and wealth is structured and concentrated across national markets. We report on the first cross-country investigation of ownership networks in the stock markets of 48 countries around the world. On the one hand, our analysis confirms results expected on the basis of the literature on corporate control, namely that in Anglo-Saxon countries control tends to be dispersed among numerous shareholders. On the other hand, it also reveals that in the same countries, control is found to be highly concentrated at the global level, namely lying in the hands of very few important shareholders. This result has previously not been reported , as it is not observable without the kind of network analysis developed here.
We present a methodology to extract the backbone of complex networks based on the weight and direction of links, as well as on nontopological properties of nodes. We show how the methodology can be applied in general to networks in which mass or energy is flowing along the links. In particular, the procedure enables us to address important questions in economics, namely , how control and wealth are structured and concentrated across national markets. We report on the first cross-country investigation of ownership networks , focusing on the stock markets of 48 countries around the world. On the one hand, our analysis confirms results expected on the basis of the literature on corporate control, namely , that in Anglo-Saxon countries control tends to be dispersed among numerous shareholders. On the other hand, it also reveals that in the same countries, control is found to be highly concentrated at the global level, namely , lying in the hands of very few important shareholders. Interestingly, the exact opposite is observed for European countries. These results have previously not been reported as they are not observable without the kind of network analysis developed here.
[ { "type": "R", "before": "in which", "after": "based on", "start_char_pos": 69, "end_char_pos": 77 }, { "type": "R", "before": "non-topological state variables associated with nodesplay a crucial role. This", "after": "on nontopological properties of nodes. We show how the", "start_char_pos": 124, "end_char_pos": 202 }, { "type": "R", "before": "this paper, we show how", "after": "particular,", "start_char_pos": 308, "end_char_pos": 331 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 409, "end_char_pos": 409 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 433, "end_char_pos": 435 }, { "type": "R", "before": "in", "after": ", focusing on", "start_char_pos": 562, "end_char_pos": 564 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 733, "end_char_pos": 733 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 957, "end_char_pos": 957 }, { "type": "R", "before": "This result has", "after": "Interestingly, the exact opposite is observed for European countries. These results have", "start_char_pos": 1013, "end_char_pos": 1028 }, { "type": "R", "before": ", as it is", "after": "as they are", "start_char_pos": 1058, "end_char_pos": 1068 } ]
[ 0, 197, 304, 488, 616, 822, 1012 ]
0902.1328
1
We study the class of Azema-Yor (AY) processes defined from a general semimartingale with a continuous running supremum process. We show that they arise as unique strong solutions of the Bachelier stochastic differential equation which we prove is equivalent to the Drawdown equation. Solutions of the latter have the drawdown property: they always stay above a given function of their past supremum. We then show that any process which satisfies the drawdown property is in fact an AY process. The proofs exploit group structure of the set of AY processes, indexed by functions, which we introduce. Further, we study in detail AY martingales defined from a non-negative local martingale converging to zero at infinity. In particular, we construct AY martingales with a given terminal law and this allows us to rediscover the AY solution to the Skorokhod embedding problem. Finally, we prove new optimal properties of AY martingales relative to concave ordering of terminal laws of martingales .
We study the class of Azema-Yor processes defined from a general semimartingale with a continuous running supremum process. We show that they arise as unique strong solutions of the Bachelier stochastic differential equation which we prove is equivalent to the Drawdown equation. Solutions of the latter have the drawdown property: they always stay above a given function of their past supremum. We then show that any process which satisfies the drawdown property is in fact an Azema-Yor process. The proofs exploit group structure of the set of Azema-Yor processes, indexed by functions, which we introduce. Secondly we study in detail Azema-Yor martingales defined from a non-negative local martingale converging to zero at infinity. We establish relations between Average Value at Risk, Drawdown function, Hardy-Littlewood transform and its generalised inverse. In particular, we construct Azema-Yor martingales with a given terminal law and this allows us to rediscover the Azema-Yor solution to the Skorokhod embedding problem. Finally, we characterise Azema-Yor martingales showing they are optimal relative to the concave ordering of terminal variables among martingales whose maximum dominates stochastically a given benchmark .
[ { "type": "D", "before": "(AY)", "after": null, "start_char_pos": 32, "end_char_pos": 36 }, { "type": "R", "before": "AY", "after": "Azema-Yor", "start_char_pos": 483, "end_char_pos": 485 }, { "type": "R", "before": "AY", "after": "Azema-Yor", "start_char_pos": 544, "end_char_pos": 546 }, { "type": "R", "before": "Further,", "after": "Secondly", "start_char_pos": 600, "end_char_pos": 608 }, { "type": "R", "before": "AY", "after": "Azema-Yor", "start_char_pos": 628, "end_char_pos": 630 }, { "type": "A", "before": null, "after": "We establish relations between Average Value at Risk, Drawdown function, Hardy-Littlewood transform and its generalised inverse.", "start_char_pos": 720, "end_char_pos": 720 }, { "type": "R", "before": "AY", "after": "Azema-Yor", "start_char_pos": 749, "end_char_pos": 751 }, { "type": "R", "before": "AY", "after": "Azema-Yor", "start_char_pos": 827, "end_char_pos": 829 }, { "type": "R", "before": "prove new optimal properties of AY martingales relative to", "after": "characterise Azema-Yor martingales showing they are optimal relative to the", "start_char_pos": 887, "end_char_pos": 945 }, { "type": "R", "before": "laws of martingales", "after": "variables among martingales whose maximum dominates stochastically a given benchmark", "start_char_pos": 975, "end_char_pos": 994 } ]
[ 0, 128, 284, 400, 494, 599, 719, 874 ]
0902.1394
1
This paper addresses the following foundational question: what is the maximum theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? As shown in this paper, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer with respect to the case of systems where the streamed content is distributed through one or more flows (sub-streams). To circumvent the complexity emerging when directly dealing with delay, we express performance in term of a convenient metric, called "stream diffusion metric". We show that it is directly related to the end-to-end minimum delay achievable in a P2P streaming network. In an homogeneous scenario, we derive a performance bound for such metric, and we show how this bound relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. In this bound, n-step Fibonacci sequences do emerge, and appear to set the fundamental laws that characterize the optimal operation of chunk-based systems . A further technical contribution of the paper is an advance in the theory of n-step Fibonacci sums for which a prior reference result was missing .
This paper addresses the following foundational question: what is the maximum theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? As shown in this paper, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer with respect to the case of systems where the streamed content is distributed through one or more flows (sub-streams). To circumvent the complexity emerging when directly dealing with delay, we express performance in term of a convenient metric, called "stream diffusion metric". We show that it is directly related to the end-to-end minimum delay achievable in a P2P streaming network. In a homogeneous scenario, we derive a performance bound for such metric, and we show how this bound relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. In this bound, k-step Fibonacci sequences do emerge, and appear to set the fundamental laws that characterize the optimal operation of chunk-based systems .
[ { "type": "R", "before": "an", "after": "a", "start_char_pos": 818, "end_char_pos": 820 }, { "type": "R", "before": "n-step", "after": "k-step", "start_char_pos": 1074, "end_char_pos": 1080 }, { "type": "D", "before": ". A further technical contribution of the paper is an advance in the theory of n-step Fibonacci sums for which a prior reference result was missing", "after": null, "start_char_pos": 1214, "end_char_pos": 1361 } ]
[ 0, 216, 546, 707, 814, 1058, 1215 ]
0902.1576
1
Heterogeneity of economic agents is emphasized in a new trend of macroeconomics. Accordingly the new emerging discipline requires one to replace the production function, one of key ideas in the conventional economics, by an alternative which can take an explicit account of distribution of firms' production activities. In this paper we propose a new idea referred to as production copula; a copula is an analytic means for modeling dependence among variables. Such a production copula predicts value added yielded by a firm with given capital and labor in a probabilistic way. It is thereby in sharp contrast to the production function where the productivity of firms is completely deterministic. We demonstrate empirical construction of a production copula using financial data of listed firms in Japan. Analysis of the data shows that there are significant correlations among their capital, labor and value added and confirms that the values added are too widely scattered to be represented by a production function. We study four models including the variants of Frank, Gumbel , Clayton copulas and find that the non-exchangeable Gumbel copula describes the data distribution and correlation very accurately .
Heterogeneity of economic agents is emphasized in a new trend of macroeconomics. Accordingly the new emerging discipline requires one to replace the production function, one of key ideas in the conventional economics, by an alternative which can take an explicit account of distribution of firms' production activities. In this paper we propose a new idea referred to as production copula; a copula is an analytic means for modeling dependence among variables. Such a production copula predicts value added yielded by firms with given capital and labor in a probabilistic way. It is thereby in sharp contrast to the production function where the output of firms is completely deterministic. We demonstrate empirical construction of a production copula using financial data of listed firms in Japan. Analysis of the data shows that there are significant correlations among their capital, labor and value added and confirms that the values added are too widely scattered to be represented by a production function. We employ four models for the production copula, that is, trivariate versions of Frank, Gumbel and survival Clayton and non-exchangeable trivariate Gumbel; the last one works best .
[ { "type": "R", "before": "a firm", "after": "firms", "start_char_pos": 518, "end_char_pos": 524 }, { "type": "R", "before": "productivity", "after": "output", "start_char_pos": 647, "end_char_pos": 659 }, { "type": "R", "before": "study four models including the variants", "after": "employ four models for the production copula, that is, trivariate versions", "start_char_pos": 1023, "end_char_pos": 1063 }, { "type": "R", "before": ", Clayton copulas and find that the", "after": "and survival Clayton and", "start_char_pos": 1081, "end_char_pos": 1116 }, { "type": "R", "before": "Gumbel copula describes the data distribution and correlation very accurately", "after": "trivariate Gumbel; the last one works best", "start_char_pos": 1134, "end_char_pos": 1211 } ]
[ 0, 80, 319, 389, 460, 577, 697, 805, 1019 ]
0902.2479
1
The value function of an optimal stopping problem for a process with Levy jumps is known to be a generalized solution of a variational inequality. Assuming the diffusion component of the process is non-degenerate and a mild assumption on the singularity of the Levy measure, this paper shows that the value function is smooth in the continuation region for problems with either finite or infinite variation jumps . Moreover , the smooth-fit property is shown via the global regularity of the value function .
The value function of an optimal stopping problem for a process with L\'{e jumps is known to be a generalized solution of a variational inequality. Assuming the diffusion component of the process is nondegenerate and a mild assumption on the singularity of the L\'{e measure, this paper shows that the value function of problems on an unbounded domain with infinite activity jumps is W^{2,1 , the smooth-fit property holds and the value function is C^{2,1 .
[ { "type": "R", "before": "Levy", "after": "L\\'{e", "start_char_pos": 69, "end_char_pos": 73 }, { "type": "R", "before": "non-degenerate", "after": "nondegenerate", "start_char_pos": 198, "end_char_pos": 212 }, { "type": "R", "before": "Levy", "after": "L\\'{e", "start_char_pos": 261, "end_char_pos": 265 }, { "type": "R", "before": "is smooth in the continuation region for problems with either finite or infinite variation jumps . Moreover", "after": "of problems on an unbounded domain with infinite activity jumps is W^{2,1", "start_char_pos": 316, "end_char_pos": 423 }, { "type": "R", "before": "is shown via the global regularity of the value function", "after": "holds and the value function is C^{2,1", "start_char_pos": 450, "end_char_pos": 506 } ]
[ 0, 146, 414 ]
0902.2479
2
The value function of an optimal stopping problem for a process with L\'{e}vy jumps is known to be a generalized solution of a variational inequality. Assuming the diffusion component of the process is nondegenerate and a mild assumption on the singularity of the L\'{e}vy measure, this paper shows that the value function of problems on an unbounded domain with infinite activity jumps is W^{2,1}_{p, loc} . As a result, the smooth-fit property holds and the value function is C^{2,1 .
The value function of an optimal stopping problem for a process with L\'{e}vy jumps is known to be a generalized solution of a variational inequality. Assuming the diffusion component of the process is nondegenerate and a mild assumption on the singularity of the L\'{e}vy measure, this paper shows that the value function of obstacle problems on an unbounded domain with infinite activity jumps is W^{2,1}_{p, loc} .
[ { "type": "A", "before": null, "after": "obstacle", "start_char_pos": 326, "end_char_pos": 326 }, { "type": "D", "before": ". As a result, the smooth-fit property holds and the value function is C^{2,1", "after": null, "start_char_pos": 408, "end_char_pos": 485 } ]
[ 0, 150 ]
0902.2479
3
The value function of an optimal stopping problem for a process with L\'{e}vy jumps is known to be a generalized solution of a variational inequality. Assuming the diffusion component of the process is nondegenerate and a mild assumption on the singularity of the L\'{e}vy measure, this paper shows that the value function of obstacle problems on an unbounded domain with infinite activity jumps is W^{2,1}_{p, loc} .
The value function of an optimal stopping problem for a process with L\'{e}vy jumps is known to be a generalized solution of a variational inequality. Assuming the diffusion component of the process is nondegenerate and a mild assumption on the singularity of the L\'{e}vy measure, this paper shows that the value function of obstacle problems on an unbounded domain with finite/infinite variation jumps is in W^{2,1}_{p, loc} . As a consequence, the smooth-fit property holds .
[ { "type": "R", "before": "infinite activity jumps is", "after": "finite/infinite variation jumps is in", "start_char_pos": 372, "end_char_pos": 398 }, { "type": "A", "before": null, "after": ". As a consequence, the smooth-fit property holds", "start_char_pos": 416, "end_char_pos": 416 } ]
[ 0, 150 ]
0902.2479
4
The value function of an optimal stopping problem for a process with L\'{e is known to be a generalized solution of a variational inequality. Assuming the diffusion component of the process is nondegenerate and a mild assumption on the singularity of the L\'{e}vy measure, this paper shows that the value function of obstacle problems on an unbounded domain with finite/infinite variation jumps is in W^{2,1}_{p, loc} . As a consequence, the smooth-fit property holds.
The value function of an optimal stopping problem for jump diffusions is known to be a generalized solution of a variational inequality. Assuming that the diffusion component of the process is nondegenerate and a mild assumption on the singularity of the L\'{e}vy measure, this paper shows that the value function of this optimal stopping problem on an unbounded domain with finite/infinite variation jumps is in W^{2,1}_{p, loc} with p\in(1, \infty) . As a consequence, the smooth-fit property holds.
[ { "type": "R", "before": "a process with L\\'{e", "after": "jump diffusions", "start_char_pos": 54, "end_char_pos": 74 }, { "type": "A", "before": null, "after": "that", "start_char_pos": 151, "end_char_pos": 151 }, { "type": "R", "before": "obstacle problems", "after": "this optimal stopping problem", "start_char_pos": 318, "end_char_pos": 335 }, { "type": "A", "before": null, "after": "with p\\in(1, \\infty)", "start_char_pos": 419, "end_char_pos": 419 } ]
[ 0, 141 ]
0902.2912
1
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method (NSM), extended to unstructured meshes by obtaining jump coefficients from the finite element formulation of the corresponding macroscopic equation. In this manual , we describe how to use the software together with COMSOL Multiphysics 3.4 and Matlab to set up simulations. We provide a detailed account of the code structure and of the available interfaces. This makes modifications and extensions of the code possible. We also give two detailed examples, in which we describe the process of simulating and visualizing two models from the systems biology literature in a step-by-step manner .
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method (NSM), extended to unstructured meshes by obtaining jump coefficients from the finite element formulation of the corresponding macroscopic equation. This manual describes version 1.1 of the software .
[ { "type": "R", "before": "In this manual , we describe how to use the software together with COMSOL Multiphysics 3.4 and Matlab to set up simulations. We provide a detailed account of the code structure and of the available interfaces. This makes modifications and extensions of the code possible. We also give two detailed examples, in which we describe the process of simulating and visualizing two models from the systems biology literature in a step-by-step manner", "after": "This manual describes version 1.1 of the software", "start_char_pos": 475, "end_char_pos": 917 } ]
[ 0, 125, 271, 474, 599, 684, 746 ]
0902.2912
2
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method (NSM) , extended to unstructured meshes by obtaining jump coefficients from the finite element formulation of the corresponding macroscopic equation. This manual describes version 1.1 of the software .
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method , extended to unstructured meshes by obtaining jump coefficients from a finite element formulation of the corresponding macroscopic equation. This manual describes version 1.2 of the software . URDME 1.2 includes support for Comsol Multiphysics 4.1, 4.2, 4.3 as well as the previous version 3.5a. Additionally, support for basic SBML has been added along with the possibility to compile in stand-alone mode .
[ { "type": "D", "before": "(NSM)", "after": null, "start_char_pos": 326, "end_char_pos": 331 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 402, "end_char_pos": 405 }, { "type": "R", "before": "1.1", "after": "1.2", "start_char_pos": 506, "end_char_pos": 509 }, { "type": "A", "before": null, "after": ". URDME 1.2 includes support for Comsol Multiphysics 4.1, 4.2, 4.3 as well as the previous version 3.5a. Additionally, support for basic SBML has been added along with the possibility to compile in stand-alone mode", "start_char_pos": 526, "end_char_pos": 526 } ]
[ 0, 125, 271, 475 ]
0902.2912
3
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method, extended to unstructured meshes by obtaining jump coefficients from a finite element formulation of the corresponding macroscopic equation. This manual describes version 1.2 of the software. URDME 1.2 includes support for Comsol Multiphysics 4.1, 4.2, 4.3 as well as the previous version 3.5a. Additionally, support for basic SBML has been added along with the possibility to compile in stand-alone mode .
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method, extended to unstructured meshes by obtaining jump coefficients from a finite element formulation of the corresponding macroscopic equation. This manual describes version 1.3 of the software. URDME 1.3 includes support for Comsol Multiphysics 5.x and PDE Toolbox version 1.5 and above .
[ { "type": "R", "before": "1.2", "after": "1.3", "start_char_pos": 497, "end_char_pos": 500 }, { "type": "R", "before": "1.2", "after": "1.3", "start_char_pos": 524, "end_char_pos": 527 }, { "type": "R", "before": "4.1, 4.2, 4.3 as well as the previous version 3.5a. Additionally, support for basic SBML has been added along with the possibility to compile in stand-alone mode", "after": "5.x and PDE Toolbox version 1.5 and above", "start_char_pos": 569, "end_char_pos": 730 } ]
[ 0, 125, 271, 466, 517, 620 ]
0902.2912
4
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method, extended to unstructured meshes by obtaining jump coefficients from a finite element formulation of the corresponding macroscopic equation. This manual describes version 1.3 of the software. URDME 1.3 includes support for Comsol Multiphysics 5.x and PDE Toolbox version 1.5 and above .
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method, extended to unstructured meshes by obtaining jump coefficients from a finite element formulation of the corresponding macroscopic equation. This manual describes version 1.4 of the software. Refer to URL for the latest updates .
[ { "type": "R", "before": "1.3", "after": "1.4", "start_char_pos": 497, "end_char_pos": 500 }, { "type": "R", "before": "URDME 1.3 includes support for Comsol Multiphysics 5.x and PDE Toolbox version 1.5 and above", "after": "Refer to URL for the latest updates", "start_char_pos": 518, "end_char_pos": 610 } ]
[ 0, 125, 271, 466, 517 ]
0902.2965
1
In modern portfolio theory, the balancing of expected returns on investments against uncertainties in those returns is aided by the use of utility functions. The Kelly criterion offers another approach, rooted in information theory, that always implies logarithmic utility. The two approaches seem incompatible, too loosely or too tightly constraining investors' risk preferences, from their respective perspectives. This incompatibility goes away by noticing that the model used in both approaches , geometric Brownian motion, is a non-ergodic process, in the sense that ensemble-average returns differ from time-average returns in a single realization . The classic papers on portfolio theoryuse ensemble-average returns. The Kelly-result is obtained by considering time-average returns. The averages differ by a logarithm. In portfolio theory this logarithm can be implemented as a logarithmic utility function. It is important to distinguish between effects of non-ergodicity and genuine utility constraints. For instance, ensemble-average returns depend linearly on leverage. This measure can thus incentivize investors to maximize leverage, which is detrimental to time-average returns and overall market stability . A better understanding of the significance of time-irreversibility and non-ergodicity and the resulting bounds on leverage may help policy makers in reshaping financial risk controls.
In modern portfolio theory, the balancing of expected returns on investments against uncertainties in those returns is aided by the use of utility functions. The Kelly criterion offers another approach, rooted in information theory, that always implies logarithmic utility. The two approaches seem incompatible, too loosely or too tightly constraining investors' risk preferences, from their respective perspectives. The conflict can be understood on the basis that the multiplicative models used in both approaches are non-ergodic which leads to ensemble-average returns differing from time-average returns in single realizations . The classic treatments, from the very beginning of probability theory, use ensemble-averages, whereas the Kelly-result is obtained by considering time-averages. Maximizing the time-average growth rates for an investment defines an optimal leverage, whereas growth rates derived from ensemble-average returns depend linearly on leverage. The latter measure can thus incentivize investors to maximize leverage, which is detrimental to time-average growth and overall market stability . The Sharpe ratio is insensitive to leverage. Its relation to optimal leverage is discussed . A better understanding of the significance of time-irreversibility and non-ergodicity and the resulting bounds on leverage may help policy makers in reshaping financial risk controls.
[ { "type": "R", "before": "This incompatibility goes away by noticing that the model", "after": "The conflict can be understood on the basis that the multiplicative models", "start_char_pos": 417, "end_char_pos": 474 }, { "type": "R", "before": ", geometric Brownian motion, is a", "after": "are", "start_char_pos": 499, "end_char_pos": 532 }, { "type": "R", "before": "process, in the sense that", "after": "which leads to", "start_char_pos": 545, "end_char_pos": 571 }, { "type": "R", "before": "differ", "after": "differing", "start_char_pos": 597, "end_char_pos": 603 }, { "type": "R", "before": "a single realization", "after": "single realizations", "start_char_pos": 633, "end_char_pos": 653 }, { "type": "R", "before": "papers on portfolio theoryuse ensemble-average returns. The", "after": "treatments, from the very beginning of probability theory, use ensemble-averages, whereas the", "start_char_pos": 668, "end_char_pos": 727 }, { "type": "A", "before": null, "after": "time-averages. Maximizing the", "start_char_pos": 768, "end_char_pos": 768 }, { "type": "R", "before": "returns. The averages differ by a logarithm. In portfolio theory this logarithm can be implemented as a logarithmic utility function. It is important to distinguish between effects of non-ergodicity and genuine utility constraints. For instance,", "after": "growth rates for an investment defines an optimal leverage, whereas growth rates derived from", "start_char_pos": 782, "end_char_pos": 1027 }, { "type": "R", "before": "This", "after": "The latter", "start_char_pos": 1082, "end_char_pos": 1086 }, { "type": "R", "before": "returns", "after": "growth", "start_char_pos": 1185, "end_char_pos": 1192 }, { "type": "A", "before": null, "after": ". The Sharpe ratio is insensitive to leverage. Its relation to optimal leverage is discussed", "start_char_pos": 1222, "end_char_pos": 1222 } ]
[ 0, 157, 273, 416, 655, 723, 790, 826, 915, 1013, 1081, 1224 ]
0902.3301
1
Molecular motors are biogenic force generators acting in the nanometer range . They convert chemical energy into mechanical work and move along filamentous structures. In this research, we will discuss the velocity of molecular motors, in framework of a mechanochemical network theory. Our network modelis based on the distinct mechanochemical states of molecular motors. It can be regarded as a generalization of the one used by Steffen Liepelt and Reinhard Lipowsky (PRL 98, 258102 (2007)) . The method used in this research is similar as the one used by Michael E. Fisher and Anatoly B. Kolomeisky (PNAS(2001) 98(14) P7748-7753) . Generally, the formulation of the velocity of molecular motors can be obtained .
Molecular motors are single macromolecules that generate forces at the piconewton range and nanometer scale . They convert chemical energy into mechanical work by moving along filamentous structures. In this paper, we study the velocity of two-head molecular motors in the framework of a mechanochemical network theory. The network model, a generalization of the recently work of Liepelt and Lipowsky (PRL 98, 258102 (2007)) , is based on the discrete mechanochemical states of a molecular motor with multiple cycles. By generalizing the mathematical method developed by Fisher and Kolomeisky for single cycle motor (PNAS(2001) 98(14) P7748-7753) , we are able to obtain an explicit formula for the velocity of a molecular motor .
[ { "type": "R", "before": "biogenic force generators acting in the nanometer range", "after": "single macromolecules that generate forces at the piconewton range and nanometer scale", "start_char_pos": 21, "end_char_pos": 76 }, { "type": "R", "before": "and move", "after": "by moving", "start_char_pos": 129, "end_char_pos": 137 }, { "type": "R", "before": "research, we will discuss", "after": "paper, we study", "start_char_pos": 176, "end_char_pos": 201 }, { "type": "R", "before": "molecular motors, in", "after": "two-head molecular motors in the", "start_char_pos": 218, "end_char_pos": 238 }, { "type": "R", "before": "Our network modelis based on the distinct mechanochemical states of molecular motors. It can be regarded as", "after": "The network model,", "start_char_pos": 286, "end_char_pos": 393 }, { "type": "R", "before": "one used by Steffen Liepelt and Reinhard", "after": "recently work of Liepelt and", "start_char_pos": 418, "end_char_pos": 458 }, { "type": "R", "before": ". The method used in this research is similar as the one used by Michael E. Fisher and Anatoly B. Kolomeisky", "after": ", is based on the discrete mechanochemical states of a molecular motor with multiple cycles. By generalizing the mathematical method developed by Fisher and Kolomeisky for single cycle motor", "start_char_pos": 492, "end_char_pos": 600 }, { "type": "R", "before": ". Generally, the formulation of the velocity of molecular motors can be obtained", "after": ", we are able to obtain an explicit formula for the velocity of a molecular motor", "start_char_pos": 632, "end_char_pos": 712 } ]
[ 0, 78, 167, 285, 371, 567, 633 ]
0902.3413
1
The biogenesis of lipid droplets (LD) in the yeast Saccharomyces cerevisiae was theoretically investigated on basis of a biophysical model. In accordance with the standard model of LD formation, we assumed that neutral lipids oil-out between the membrane leaflets of the endoplasmic reticulum (ER), resulting in LDs that bud-off when a critical size is reached. Relying on this mechanism, we have calculated the change in the elastic free energy of the membrane caused by nascent LDs. We found a gradual de-mixing of lipids in the membrane leaflet that goes along with an increase in surface curvature at the site of LD formation. As a result of de-mixing , the phospholipid monolayer was able to gain energy during LD growth . This suggested that the formation of curved interfaces , necessary for the creation of a lipid droplet, were driven or supported by the process of lipid de-mixing. In the case of Saccharomyces cerevisiae LDs eventually detached from the ER at a critical diameter of about 30-50 nm. We concluded that if the standard model of LD formation is valid, LD biogenesis is a two step process. Small LD are produced from the ER, which subsequently ripe within the cytosol through a series of fusions.
The biogenesis of lipid droplets (LD) in the yeast Saccharomyces cerevisiae was theoretically investigated on basis of a biophysical model. In accordance with the prevailing model of LD formation, we assumed that neutral lipids oil-out between the membrane leaflets of the endoplasmic reticulum (ER), resulting in LD that bud-off when a critical size is reached. Mathematically, LD were modeled as spherical protuberances in an otherwise planar ER membrane. We estimated the local phospholipid composition, and calculated the change in elastic free energy of the membrane caused by nascent LD. Based on this model calculation, we found a gradual demixing of lipids in the membrane leaflet that goes along with an increase in surface curvature at the site of LD formation. During demixing , the phospholipid monolayer was able to gain energy during LD growth , which suggested that the formation of curved interfaces was supported by or even driven by lipid demixing. In addition, we show that demixing is thermodynamically necessary as LD cannot bud-off otherwise. In the case of Saccharomyces cerevisiae our model predicts a LD bud-off diameter of about 13 nm. This diameter is far below the experimentally determined size of typical yeast LD. Thus, we concluded that if the standard model of LD formation is valid, LD biogenesis is a two step process. Small LD are produced from the ER, which subsequently ripe within the cytosol through a series of fusions.
[ { "type": "R", "before": "standard", "after": "prevailing", "start_char_pos": 163, "end_char_pos": 171 }, { "type": "R", "before": "LDs", "after": "LD", "start_char_pos": 312, "end_char_pos": 315 }, { "type": "R", "before": "Relying on this mechanism, we have", "after": "Mathematically, LD were modeled as spherical protuberances in an otherwise planar ER membrane. We estimated the local phospholipid composition, and", "start_char_pos": 362, "end_char_pos": 396 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 422, "end_char_pos": 425 }, { "type": "R", "before": "LDs. We", "after": "LD. Based on this model calculation, we", "start_char_pos": 480, "end_char_pos": 487 }, { "type": "R", "before": "de-mixing", "after": "demixing", "start_char_pos": 504, "end_char_pos": 513 }, { "type": "R", "before": "As a result of de-mixing", "after": "During demixing", "start_char_pos": 631, "end_char_pos": 655 }, { "type": "R", "before": ". This", "after": ", which", "start_char_pos": 726, "end_char_pos": 732 }, { "type": "R", "before": ", necessary for the creation of a lipid droplet, were driven or supported by the process of lipid de-mixing. In", "after": "was supported by or even driven by lipid demixing. In addition, we show that demixing is thermodynamically necessary as LD cannot bud-off otherwise. In", "start_char_pos": 783, "end_char_pos": 894 }, { "type": "R", "before": "LDs eventually detached from the ER at a critical", "after": "our model predicts a LD bud-off", "start_char_pos": 932, "end_char_pos": 981 }, { "type": "R", "before": "30-50 nm. We", "after": "13 nm. This diameter is far below the experimentally determined size of typical yeast LD. Thus, we", "start_char_pos": 1000, "end_char_pos": 1012 } ]
[ 0, 139, 361, 484, 630, 727, 891, 1009, 1112 ]
0903.0096
1
We provide a simple and accurate analytical model for infrastructure IEEE 802.11 WLANs. Our model applies if the cell radius, R, is much smaller than the distance , R_{cs} , up to which carrier sensing is effective. The condition R_{cs} >> R is likely to hold in a dense deployment of Access Points (APs) where, for every client or station (STA), there is an AP very close to the STA such that the STA can associate with the AP at a high Physical (PHY) rate. We develop a scalable cell level model for such WLANs with saturated AP and STA queues as well as for TCP-controlled long file transfers . The accuracy of our model is demonstrated by comparison with ns-2 simulations. We also demonstrate how our analytical model could be applied in conjunction with a Learning Automata (LA) algorithm for optimal channel assignment. Based on the insights provided by our analytical model, we also propose a simple decentralized algorithm which provides static channel assignments that are \textit{Nash equilibria in pure strategies for the objective of maximizing \textit{normalized network throughput in as many steps as there are channels. In contrast to prior work, our approach to channel assignment is based on the \textit{throughput metric .
We provide a simple and accurate analytical model for multi-cell infrastructure IEEE 802.11 WLANs. Our model applies if the cell radius, R, is much smaller than the carrier sensing range , R_{cs} . We argue that, the condition R_{cs} >> R is likely to hold in a dense deployment of Access Points (APs) where, for every client or station (STA), there is an AP very close to the STA such that the STA can associate with the AP at a high physical rate. We develop a scalable cell level model for such WLANs with saturated AP and STA queues as well as for TCP-controlled long file downloads . The accuracy of our model is demonstrated by comparison with ns-2 simulations. We also demonstrate how our analytical model could be applied in conjunction with a Learning Automata (LA) algorithm for optimal channel assignment. Based on the insights provided by our analytical model, we propose a simple decentralized algorithm which provides static channel assignments that are \textit{Nash equilibria in pure strategies for the objective of maximizing \textit{ normalized network throughput. Our channel assignment algorithm requires neither any explicit knowledge of the topology nor any message passing, and provides assignments in only as many steps as there are channels. In contrast to prior work, our approach to channel assignment is based on the \textit{throughput metric .
[ { "type": "R", "before": "infrastructure", "after": "multi-cell infrastructure", "start_char_pos": 54, "end_char_pos": 68 }, { "type": "R", "before": "distance", "after": "carrier sensing range", "start_char_pos": 154, "end_char_pos": 162 }, { "type": "R", "before": ", up to which carrier sensing is effective. The", "after": ". We argue that, the", "start_char_pos": 172, "end_char_pos": 219 }, { "type": "R", "before": "Physical (PHY)", "after": "physical", "start_char_pos": 438, "end_char_pos": 452 }, { "type": "R", "before": "cell level", "after": "cell level", "start_char_pos": 481, "end_char_pos": 491 }, { "type": "R", "before": "transfers", "after": "downloads", "start_char_pos": 586, "end_char_pos": 595 }, { "type": "R", "before": "ns-2", "after": "ns-2", "start_char_pos": 659, "end_char_pos": 663 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 885, "end_char_pos": 889 }, { "type": "R", "before": "static", "after": "static", "start_char_pos": 946, "end_char_pos": 952 }, { "type": "R", "before": "Nash equilibria in pure strategies", "after": "Nash equilibria in pure strategies", "start_char_pos": 990, "end_char_pos": 1024 }, { "type": "D", "before": "normalized network throughput", "after": null, "start_char_pos": 1065, "end_char_pos": 1094 }, { "type": "R", "before": "in", "after": "normalized network throughput. Our channel assignment algorithm requires neither any explicit knowledge of the topology nor any message passing, and provides assignments in only", "start_char_pos": 1095, "end_char_pos": 1097 }, { "type": "R", "before": "throughput metric", "after": "throughput metric", "start_char_pos": 1221, "end_char_pos": 1238 } ]
[ 0, 87, 215, 458, 597, 676, 825, 1134 ]
0903.0680
1
We propose a new generic framework for option price modelling, using quantum neural computation formalism. Briefly, when we apply a classical nonlinear neural-network learning to a quantum linear Schr\"odinger equation, as a result we get a nonlinear Schr\"odinger equation (NLS), performing as a quantum stochastic filter. In this paper, we present a bidirectional quantum associative memory model for the Black--Scholes--like option price evolution, consisting of a pair of coupled NLS equations, one governing the stochastic volatility and the other governing the option price, both URLanizing in an adaptive `market heat potential', trained by continuous Hebbian learning. This stiff pair of NLS equations is numerically solved using the method of lines with adaptive step-size integrator.
We propose a new cognitive framework for option price modelling, using quantum neural computation formalism. Briefly, when we apply a classical nonlinear neural-network learning to a linear quantum Schr\"odinger equation, as a result we get a nonlinear Schr\"odinger equation (NLS), performing as a quantum stochastic filter. In this paper, we present a bidirectional quantum associative memory model for the Black--Scholes--like option price evolution, consisting of a pair of coupled NLS equations, one governing the stochastic volatility and the other governing the option price, both URLanizing in an adaptive `market heat potential', trained by continuous Hebbian learning. This stiff pair of NLS equations is numerically solved using the method of lines with adaptive step-size integrator. Keywords: Option price modelling, Quantum neural computation, nonlinear Schr\"odinger equations, leverage effect, bidirectional associative memory
[ { "type": "R", "before": "generic", "after": "cognitive", "start_char_pos": 17, "end_char_pos": 24 }, { "type": "R", "before": "quantum linear", "after": "linear quantum", "start_char_pos": 181, "end_char_pos": 195 }, { "type": "A", "before": null, "after": "Keywords: Option price modelling, Quantum neural computation, nonlinear Schr\\\"odinger equations, leverage effect, bidirectional associative memory", "start_char_pos": 794, "end_char_pos": 794 } ]
[ 0, 106, 323, 676 ]
0903.2243
1
After reviewing the strengthened hypotheses of the Shannon-MacMillan-Breiman Theorem, versus the standard statement of the Noiseless Coding Theorem, we state and prove a similar result for relative information , otherwise known as information gain. If the gain results from receiving a side message, this gain, when averaged over the ensemble of possible side messages, is precisely the pragmatic information defined in Weinberger ( 2002 ). The relative information result proven herein can be used to extend the information theoretic analysis of the Kelly Criterion, and its generalization, the horse race, an analysis of securities market trading strategies presented in Cover and Thomas (1992). We show, in particular, that their results for statistically independent horse races also apply to a series of races where the stochastic process of winning horses, payoffs, and strategies depend on some ergodic process, including, but not limited to the history of previous races. Also, if the bettor is receiving messages (side information) about the probability distribution of winners, the doubling rate of the bettor's winnings can be interpreted as the pragmatic information of the messages. Both the theorem proven herein and the application to trading make a compelling case for Weinberger's definition of pragmatic information .
This paper is part of an ongoing investigation of "pragmatic information", defined in Weinberger ( 2002 ) as "the amount of information actually used in making a decision". Because a study of information rates led to the Noiseless and Noisy Coding Theorems, two of the most important results of Shannon's theory, we begin the paper by defining a pragmatic information rate, showing that all of the relevant limits make sense, and interpreting them as the improvement in compression obtained from using the correct distribution of transmitted symbols. The first of two applications of the theory extends the information theoretic analysis of the Kelly Criterion, and its generalization, the horse race, to a series of races where the stochastic process of winning horses, payoffs, and strategies depend on some stationary process, including, but not limited to the history of previous races. If the bettor is receiving messages (side information) about the probability distribution of winners, the doubling rate of the bettor's winnings is bounded by the pragmatic information of the messages. A second application is to the question of market efficiency. An efficient market is, by definition, a market in which the pragmatic information of the "tradable past" with respect to current prices is zero. Under this definition, markets whose returns are characterized by a GARCH(1,1) process cannot be efficient. Finally, a pragmatic informational analogue to Shannon's Noisy Coding Theorem suggests that a cause of market inefficiency is that the underlying fundamentals are changing so fast that the price discovery mechanism simply cannot keep up. This may happen most readily in the run-up to a financial bubble, where investors' willful ignorance degrade the information processing capabilities of the market .
[ { "type": "R", "before": "After reviewing the strengthened hypotheses of the Shannon-MacMillan-Breiman Theorem, versus the standard statement of the Noiseless Coding Theorem, we state and prove a similar result for relative information , otherwise known as information gain. If the gain results from receiving a side message, this gain, when averaged over the ensemble of possible side messages, is precisely the pragmatic information defined in Weinberger (", "after": "This paper is part of an ongoing investigation of \"pragmatic information\", defined in Weinberger (", "start_char_pos": 0, "end_char_pos": 432 }, { "type": "R", "before": "). The relative information result proven herein can be used to extend the", "after": ") as \"the amount of information actually used in making a decision\". Because a study of information rates led to the Noiseless and Noisy Coding Theorems, two of the most important results of Shannon's theory, we begin the paper by defining a pragmatic information rate, showing that all of the relevant limits make sense, and interpreting them as the improvement in compression obtained from using the correct distribution of transmitted symbols. The first of two applications of the theory extends the", "start_char_pos": 438, "end_char_pos": 512 }, { "type": "D", "before": "an analysis of securities market trading strategies presented in Cover and Thomas (1992). 
We show, in particular, that their results for statistically independent horse races also apply", "after": null, "start_char_pos": 608, "end_char_pos": 793 }, { "type": "R", "before": "ergodic", "after": "stationary", "start_char_pos": 902, "end_char_pos": 909 }, { "type": "R", "before": "Also, if", "after": "If", "start_char_pos": 980, "end_char_pos": 988 }, { "type": "R", "before": "can be interpreted as", "after": "is bounded by", "start_char_pos": 1131, "end_char_pos": 1152 }, { "type": "R", "before": "Both the theorem proven herein and the application to trading make a compelling case for Weinberger's definition of pragmatic information", "after": "A second application is to the question of market efficiency. An efficient market is, by definition, a market in which the pragmatic information of the \"tradable past\" with respect to current prices is zero. Under this definition, markets whose returns are characterized by a GARCH(1,1) process cannot be efficient. Finally, a pragmatic informational analogue to Shannon's Noisy Coding Theorem suggests that a cause of market inefficiency is that the underlying fundamentals are changing so fast that the price discovery mechanism simply cannot keep up. This may happen most readily in the run-up to a financial bubble, where investors' willful ignorance degrade the information processing capabilities of the market", "start_char_pos": 1196, "end_char_pos": 1333 } ]
[ 0, 248, 440, 697, 979, 1195 ]
0903.2352
1
This paper investigates the limit behavior of Markov Decision Processes (MDPs) made of independent particles evolving in a common environment, when the number of particles goes to infinity. In the finite horizon case or with a discounted cost and an infinite horizon, we show that when the number of particles becomes large, the optimal cost of the system converges almost surely to the optimal cost of a discrete deterministic system (the " optimal mean field " ). Convergence also holds for optimal policies. We further provide insights on the speed of convergence by proving several central limits theorems for the cost and the state of the Markov decision process with explicit formulas for the variance of the limit Gaussian laws. Then, our framework is applied to a brokering problem in grid computing. The optimal policy for the limit deterministic system is computed explicitly. Several simulations with growing numbers of processors are reported. They compare the performance of the optimal policy of the limit system used in the finite case with classical policies (such as Join the Shortest Queue) by measuring its asymptotic gain as well as the threshold above which it starts outperforming classical policies.
This paper investigates the limit behavior of Markov Decision Processes (MDPs) made of independent particles evolving in a common environment, when the number of particles goes to infinity. In the finite horizon case or with a discounted cost and an infinite horizon, we show that when the number of particles becomes large, the optimal cost of the system converges almost surely to the optimal cost of a discrete deterministic system (the `` optimal mean field '' ). Convergence also holds for optimal policies. We further provide insights on the speed of convergence by proving several central limits theorems for the cost and the state of the Markov decision process with explicit formulas for the variance of the limit Gaussian laws. Then, our framework is applied to a brokering problem in grid computing. The optimal policy for the limit deterministic system is computed explicitly. Several simulations with growing numbers of processors are reported. They compare the performance of the optimal policy of the limit system used in the finite case with classical policies (such as Join the Shortest Queue) by measuring its asymptotic gain as well as the threshold above which it starts outperforming classical policies.
[ { "type": "R", "before": "\"", "after": "``", "start_char_pos": 440, "end_char_pos": 441 }, { "type": "R", "before": "\"", "after": "''", "start_char_pos": 461, "end_char_pos": 462 } ]
[ 0, 189, 510, 735, 808, 886, 955 ]
0903.2608
1
Synthesis of protein molecules in a cell are carried out by ribosomes. A ribosome can be regarded as a molecular motor which utilizes the input chemical energy to move on a messenger RNA (mRNA) track that also serves as a template for the polymerization of the corresponding protein. The forward movement, however, is characterized by an alternating sequence of translocation and pause. Using a quantitative model, which captures the mechanochemical cycle of an individual ribosome, we derive an {\it exact} analytical expression for the distribution of its dwell times at the successive positions on the mRNA track. Inverse of the average dwell time satisfies a `` Michaelis-Menten-like '' equation and is consistent with the general formula for the average velocity of a molecular motor with an unbranched mechano-chemical cycle. Extending this formula appropriately, we also derive the exact force-velocity relation for a ribosome. Often many ribosomes simultaneously move on the same mRNA track, while each synthesizes a copy of the same protein. We extend the model of a single ribosome by incorporating steric exclusion of different individuals on the same track. We draw the phase diagram of this model of ribosome traffic in 3-dimensional spaces spanned by experimentally controllable parameters. We suggest new experimental tests of our theoretical predictions.
Synthesis of protein molecules in a cell are carried out by ribosomes. A ribosome can be regarded as a molecular motor which utilizes the input chemical energy to move on a messenger RNA (mRNA) track that also serves as a template for the polymerization of the corresponding protein. The forward movement, however, is characterized by an alternating sequence of translocation and pause. Using a quantitative model, which captures the mechanochemical cycle of an individual ribosome, we derive an {\it exact} analytical expression for the distribution of its dwell times at the successive positions on the mRNA track. Inverse of the average dwell time satisfies a " Michaelis-Menten-like " equation and is consistent with the general formula for the average velocity of a molecular motor with an unbranched mechano-chemical cycle. Extending this formula appropriately, we also derive the exact force-velocity relation for a ribosome. Often many ribosomes simultaneously move on the same mRNA track, while each synthesizes a copy of the same protein. We extend the model of a single ribosome by incorporating steric exclusion of different individuals on the same track. We draw the phase diagram of this model of ribosome traffic in 3-dimensional spaces spanned by experimentally controllable parameters. We suggest new experimental tests of our theoretical predictions.
[ { "type": "R", "before": "``", "after": "\"", "start_char_pos": 663, "end_char_pos": 665 }, { "type": "R", "before": "''", "after": "\"", "start_char_pos": 688, "end_char_pos": 690 } ]
[ 0, 70, 283, 386, 616, 831, 934, 1050, 1169, 1304 ]
0903.2608
2
Synthesis of protein molecules in a cell are carried out by ribosomes. A ribosome can be regarded as a molecular motor which utilizes the input chemical energy to move on a messenger RNA (mRNA) track that also serves as a template for the polymerization of the corresponding protein. The forward movement, however, is characterized by an alternating sequence of translocation and pause. Using a quantitative model, which captures the mechanochemical cycle of an individual ribosome, we derive an {\it exact} analytical expression for the distribution of its dwell times at the successive positions on the mRNA track. Inverse of the average dwell time satisfies a " Michaelis-Menten-like " equation and is consistent with the general formula for the average velocity of a molecular motor with an unbranched mechano-chemical cycle. Extending this formula appropriately, we also derive the exact force-velocity relation for a ribosome. Often many ribosomes simultaneously move on the same mRNA track, while each synthesizes a copy of the same protein. We extend the model of a single ribosome by incorporating steric exclusion of different individuals on the same track. We draw the phase diagram of this model of ribosome traffic in 3-dimensional spaces spanned by experimentally controllable parameters. We suggest new experimental tests of our theoretical predictions.
Synthesis of protein molecules in a cell are carried out by ribosomes. A ribosome can be regarded as a molecular motor which utilizes the input chemical energy to move on a messenger RNA (mRNA) track that also serves as a template for the polymerization of the corresponding protein. The forward movement, however, is characterized by an alternating sequence of translocation and pause. Using a quantitative model, which captures the mechanochemical cycle of an individual ribosome, we derive an {\it exact} analytical expression for the distribution of its dwell times at the successive positions on the mRNA track. Inverse of the average dwell time satisfies a `` Michaelis-Menten-like '' equation and is consistent with the general formula for the average velocity of a molecular motor with an unbranched mechano-chemical cycle. Extending this formula appropriately, we also derive the exact force-velocity relation for a ribosome. Often many ribosomes simultaneously move on the same mRNA track, while each synthesizes a copy of the same protein. We extend the model of a single ribosome by incorporating steric exclusion of different individuals on the same track. We draw the phase diagram of this model of ribosome traffic in 3-dimensional spaces spanned by experimentally controllable parameters. We suggest new experimental tests of our theoretical predictions.
[ { "type": "R", "before": "\"", "after": "``", "start_char_pos": 663, "end_char_pos": 664 }, { "type": "R", "before": "\"", "after": "''", "start_char_pos": 687, "end_char_pos": 688 } ]
[ 0, 70, 283, 386, 616, 829, 932, 1048, 1167, 1302 ]
0903.2883
1
We investigate the distribution of flavonoid , a major category of plant secondary metabolites, across species. Flavonoid is known to show high species specificity, and was once considered as a chemical marker to understand adaptive evolution and characterization of URLanisms. We investigate the distribution among species using bipartite networks, and find that two heterogeneous distributions are conserved between several families: the power law distributions of the number of flavonoids in a species and the number of shared species of a particular flavonoid. In order to explain the possible origin of the heterogeneity, we propose a simple model with, essentially, a single parameter. As a result, we show that two respective power-law statistics emerge from simple evolutionary mechanisms based on a multiplicative process. These findings provide insights into the evolution of metabolite diversity and characterization of URLanisms that defy genome sequence analysis for different reasons.
We investigate the distribution of flavonoids , a major category of plant secondary metabolites, across species. Flavonoids are known to show high species specificity, and were once considered as chemical markers for understanding adaptive evolution and characterization of URLanisms. We investigate the distribution among species using bipartite networks, and find that two heterogeneous distributions are conserved among several families: the power-law distributions of the number of flavonoids in a species and the number of shared species of a particular flavonoid. In order to explain the possible origin of the heterogeneity, we propose a simple model with, essentially, a single parameter. As a result, we show that two respective power-law statistics emerge from simple evolutionary mechanisms based on a multiplicative process. These findings provide insights into the evolution of metabolite diversity and characterization of URLanisms that defy genome sequence analysis for different reasons.
[ { "type": "R", "before": "flavonoid", "after": "flavonoids", "start_char_pos": 35, "end_char_pos": 44 }, { "type": "R", "before": "Flavonoid is", "after": "Flavonoids are", "start_char_pos": 112, "end_char_pos": 124 }, { "type": "R", "before": "was", "after": "were", "start_char_pos": 169, "end_char_pos": 172 }, { "type": "R", "before": "a chemical marker to understand", "after": "chemical markers for understanding", "start_char_pos": 192, "end_char_pos": 223 }, { "type": "R", "before": "between", "after": "among", "start_char_pos": 410, "end_char_pos": 417 }, { "type": "R", "before": "power law", "after": "power-law", "start_char_pos": 440, "end_char_pos": 449 } ]
[ 0, 111, 277, 564, 691, 831 ]
0903.3736
1
We provide an axiomatic foundation for the representation of numeraire-invariant preferences of agents acting in a financial market. In a static environment, the simple axioms turn out to be equivalent to the following choice rule: the agent prefers one outcome over another if and only if the expected (under the agent's subjective probability) relative rate of return of the latter outcome with respect to the former is nonpositive. With the addition of a transitivity requirement, this last preference relation is extended to expected logarithmic utility maximization . We also discuss the previous in a dynamic environment, where consumption streams are the objects of choice. There, a novel result concerning a canonical representation of optional measures with unit mass enables one to explicitly solve the investment-consumption problem by completely separating the two aspects of investment and consumption. Finally, we give an application to the problem of optimal numeraire investment with a random-time horizon .
We provide an axiomatic foundation for the representation of numeraire-invariant preferences of economic agents acting in a financial market. In a static environment, the simple axioms turn out to be equivalent to the following choice rule: the agent prefers one outcome over another if and only if the expected (under the agent's subjective probability) relative rate of return of the latter outcome with respect to the former is nonpositive. With the addition of a transitivity requirement, this last preference relation has an extension that can be numerically represented by expected logarithmic utility . We also treat the case of a dynamic environment, where consumption streams are the objects of choice. There, a novel result concerning a canonical representation of unit-mass optional measures enables to explicitly solve the investment-consumption problem by separating the two aspects of investment and consumption. Finally, we give an application to the problem of optimal numeraire investment with a random time-horizon .
[ { "type": "A", "before": null, "after": "economic", "start_char_pos": 96, "end_char_pos": 96 }, { "type": "R", "before": "is extended to", "after": "has an extension that can be numerically represented by", "start_char_pos": 515, "end_char_pos": 529 }, { "type": "D", "before": "maximization", "after": null, "start_char_pos": 559, "end_char_pos": 571 }, { "type": "R", "before": "discuss the previous in", "after": "treat the case of", "start_char_pos": 582, "end_char_pos": 605 }, { "type": "R", "before": "optional measures with unit mass enables one", "after": "unit-mass optional measures enables", "start_char_pos": 745, "end_char_pos": 789 }, { "type": "D", "before": "completely", "after": null, "start_char_pos": 848, "end_char_pos": 858 }, { "type": "R", "before": "random-time horizon", "after": "random time-horizon", "start_char_pos": 1003, "end_char_pos": 1022 } ]
[ 0, 133, 435, 573, 681, 916 ]
0903.3736
2
We provide an axiomatic foundation for the representation of numeraire-invariant preferences of economic agents acting in a financial market. In a static environment, the simple axioms turn out to be equivalent to the following choice rule: the agent prefers one outcome over another if and only if the expected (under the agent's subjective probability) relative rate of return of the latter outcome with respect to the former is nonpositive. With the addition of a transitivity requirement, this last preference relation has an extension that can be numerically represented by expected logarithmic utility. We also treat the case of a dynamic environment , where consumption streams are the objects of choice. There, a novel result concerning a canonical representation of unit-mass optional measures enables to explicitly solve the investment-consumption problem by separating the two aspects of investment and consumption. Finally, we give an application to the problem of optimal numeraire investment with a random time-horizon.
We provide an axiomatic foundation for the representation of num\'{e preferences of economic agents acting in a financial market. In a static environment, the simple axioms turn out to be equivalent to the following choice rule: the agent prefers one outcome over another if and only if the expected (under the agent's subjective probability) relative rate of return of the latter outcome with respect to the former is nonpositive. With the addition of a transitivity requirement, this last preference relation has an extension that can be numerically represented by expected logarithmic utility. We also treat the case of a dynamic environment where consumption streams are the objects of choice. There, a novel result concerning a canonical representation of unit-mass optional measures enables us to explicitly solve the investment--consumption problem by separating the two aspects of investment and consumption. Finally, we give an application to the problem of optimal num\'{e investment with a random time-horizon.
[ { "type": "R", "before": "numeraire-invariant", "after": "num\\'{e", "start_char_pos": 61, "end_char_pos": 80 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 657, "end_char_pos": 658 }, { "type": "A", "before": null, "after": "us", "start_char_pos": 811, "end_char_pos": 811 }, { "type": "R", "before": "investment-consumption", "after": "investment--consumption", "start_char_pos": 836, "end_char_pos": 858 }, { "type": "R", "before": "numeraire", "after": "num\\'{e", "start_char_pos": 986, "end_char_pos": 995 } ]
[ 0, 141, 443, 608, 711, 927 ]
0903.4215
1
Multi-potent stem or progenitor cells undergo a sequential series of binary fate decisions which ultimately generates the diversity of differentiated cell . Efforts to understand cell fate control have focused on simple gene regulatory circuits that predict the presence of multiple stable states, bifurcations and switch-like transition . However, existing gene network models do not explain more complex properties of cell fate dynamics such as the hierarchical branching of developmental paths. Here, we construct a generic minimal model of the genetic regulatory network which naturally exhibits five elementary characteristics of cell differentiation: stability, branching, exclusivity, directionality, and promiscuous expression. We argue that a modular architecture comprised of repeated network elements in principle reproduces these features of differentiation by repressing irrelevant modules and hence , channeling the dynamics in the high-dimensional phase spacethrough a simpler, lower dimensional subspace . We implement our model both with ordinary differential equations (ODEs) to explore the role of bifurcations in producing the one-way character of differentiation and with stochastic differential equations (SDEs) to demonstrate the effect of noise on the system in simulations . We further argue that binary cell fate decisions are prevalent in cell differentiation due to general features of dynamical systems . This minimal model makes testable predictions about the structural basis for directional, discrete and diversifying cell phenotype development and thus , can guide the evaluation of real gene regulatory networks that govern differentiation.
Multi-potent stem or progenitor cells undergo a sequential series of binary fate decisions , which ultimately generate the diversity of differentiated cells . Efforts to understand cell fate control have focused on simple gene regulatory circuits that predict the presence of multiple stable states, bifurcations and switch-like transitions . However, existing gene network models do not explain more complex properties of cell fate dynamics such as the hierarchical branching of developmental paths. Here, we construct a generic minimal model of the genetic regulatory network controlling cell fate determination, which exhibits five elementary characteristics of cell differentiation: stability, directionality, branching, exclusivity, and promiscuous expression. We argue that a modular architecture comprising repeated network elements reproduces these features of differentiation by sequentially repressing selected modules and hence restricting the dynamics to lower dimensional subspaces of the high-dimensional state space . We implement our model both with ordinary differential equations (ODEs) , to explore the role of bifurcations in producing the one-way character of differentiation , and with stochastic differential equations (SDEs) , to demonstrate the effect of noise on the system . We further argue that binary cell fate decisions are prevalent in cell differentiation due to general features of the underlying dynamical system . This minimal model makes testable predictions about the structural basis for directional, discrete and diversifying cell phenotype development and thus can guide the evaluation of real gene regulatory networks that govern differentiation.
[ { "type": "R", "before": "which ultimately generates", "after": ", which ultimately generate", "start_char_pos": 91, "end_char_pos": 117 }, { "type": "R", "before": "cell", "after": "cells", "start_char_pos": 150, "end_char_pos": 154 }, { "type": "R", "before": "transition", "after": "transitions", "start_char_pos": 327, "end_char_pos": 337 }, { "type": "R", "before": "which naturally", "after": "controlling cell fate determination, which", "start_char_pos": 575, "end_char_pos": 590 }, { "type": "A", "before": null, "after": "directionality,", "start_char_pos": 668, "end_char_pos": 668 }, { "type": "D", "before": "directionality,", "after": null, "start_char_pos": 693, "end_char_pos": 708 }, { "type": "R", "before": "comprised of", "after": "comprising", "start_char_pos": 774, "end_char_pos": 786 }, { "type": "D", "before": "in principle", "after": null, "start_char_pos": 813, "end_char_pos": 825 }, { "type": "R", "before": "repressing irrelevant", "after": "sequentially repressing selected", "start_char_pos": 874, "end_char_pos": 895 }, { "type": "R", "before": ", channeling the dynamics in", "after": "restricting the dynamics to lower dimensional subspaces of", "start_char_pos": 914, "end_char_pos": 942 }, { "type": "R", "before": "phase spacethrough a simpler, lower dimensional subspace", "after": "state space", "start_char_pos": 964, "end_char_pos": 1020 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1095, "end_char_pos": 1095 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1186, "end_char_pos": 1186 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1237, "end_char_pos": 1237 }, { "type": "D", "before": "in simulations", "after": null, "start_char_pos": 1287, "end_char_pos": 1301 }, { "type": "R", "before": "dynamical systems", "after": "the underlying dynamical system", "start_char_pos": 1418, "end_char_pos": 1435 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1590, "end_char_pos": 1591 
} ]
[ 0, 339, 497, 736, 1022, 1303, 1437 ]
0903.4542
1
We obtain the Maximum Entropy distribution for an asset from call and digital option prices. A rigorous mathematical proof of its existence and exponential form is given, which can also be applied to legitimize a formal derivation by Buchen and Kelly. We give a simple and robust algorithm for our method and compare our results to theirs. Finally, we present numerical results which show that our approach implies very realistic volatility surfaces even when calibrating only to at-the-money options .
We obtain the maximum entropy distribution for an asset from call and digital option prices. A rigorous mathematical proof of its existence and exponential form is given, which can also be applied to legitimise a formal derivation by Buchen and Kelly. We give a simple and robust algorithm for our method and compare our results to theirs. We present numerical results which show that our approach implies very realistic volatility surfaces even when calibrating only to at-the-money options . Finally, we apply our approach to options on the S P 500 index .
[ { "type": "R", "before": "Maximum Entropy", "after": "maximum entropy", "start_char_pos": 14, "end_char_pos": 29 }, { "type": "R", "before": "legitimize", "after": "legitimise", "start_char_pos": 200, "end_char_pos": 210 }, { "type": "R", "before": "Finally, we", "after": "We", "start_char_pos": 340, "end_char_pos": 351 }, { "type": "A", "before": null, "after": ". Finally, we apply our approach to options on the S", "start_char_pos": 501, "end_char_pos": 501 }, { "type": "A", "before": null, "after": "P 500 index", "start_char_pos": 502, "end_char_pos": 502 } ]
[ 0, 92, 251, 339 ]
0903.4833
1
It is well-known how to determine the price of perpetual American options if the underlying stock price is a time-homogeneous diffusion. In the present paper we consider the inverse problem, i.e. given prices of perpetual American options for different strikes we show how to construct a time-homogeneous model for the stock price which reproduces the given option prices.
It is well known how to determine the price of perpetual American options if the underlying stock price is a time-homogeneous diffusion. In the present paper we consider the inverse problem, that is, given prices of perpetual American options for different strikes , we show how to construct a time-homogeneous stock price model which reproduces the given option prices.
[ { "type": "R", "before": "well-known", "after": "well known", "start_char_pos": 6, "end_char_pos": 16 }, { "type": "R", "before": "i.e.", "after": "that is,", "start_char_pos": 191, "end_char_pos": 195 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 261, "end_char_pos": 261 }, { "type": "R", "before": "model for the stock price", "after": "stock price model", "start_char_pos": 306, "end_char_pos": 331 } ]
[ 0, 136 ]
0903.4844
1
Here we propose an objective scheme to quantify the precise amount of negative entropy, present in an extremely important biochemical pathway; namely, the TCA cycle. Our approach is based on the computational implementation of two-person non-cooperative finite zero-sum game between positive entropy and negative entropy . Biochemical analogue of Nash equilibrium condition between these positive and negative entropy, could unambiguously provide a quantitative marker that describes the 'edge of life' for TCA cycle . Difference between concentration-profiles prevalent at the 'edge of life' and biologically observed TCA cycle, could quantitatively express the precise amount of negative entropy present in a typical biochemical network. We show here that it is not the existence of mere order, but the synchronization profile between ordered fluctuations, which accounts for biological robustness. An exhaustive sensitivity analysis could identify the concentrations, for which slightest perturbation can account for enormous increase in positive entropy. Since our algorithm is general, the same analysis can as well be performed on larger networks and (ideally) for an entire cell, if numerical data for concentration is available .
Biological systems possess negative entropy. In them, one form of order produces another, URLanized form of order. We propose a formal scheme to calculate robustness of an entire biological system by quantifying the negative entropy present in it. Our Methodology is based upon a computational implementation of two-person non-cooperative finite zero-sum game between positive (physico-chemical) and negative (biological) entropy, present in the system(TCA cycle, for this work) . Biochemical analogue of Nash equilibrium , proposed here, could measure the robustness in TCA cycle in exact numeric terms, whereas the mixed strategy game between these entropies could quantitate the progression of stages of biological adaptation. Synchronization profile amongst macromolecular concentrations (even under environmental perturbations) is found to account for negative entropy and biological robustness. Emergence of synchronization profile was investigated with dynamically varying metabolite concentrations. Obtained results were verified with that from the deterministic simulation methods. Categorical plans to apply this algorithm in Cancer studies and anti-viral therapies are proposed alongside. From theoretical perspective, this work proposes a general, rigorous and alternative view of immunology .
[ { "type": "R", "before": "Here we propose an objective scheme to quantify the precise amount of negative entropy, present in an extremely important biochemical pathway; namely, the TCA cycle. Our approach is based on the", "after": "Biological systems possess negative entropy. In them, one form of order produces another, URLanized form of order. We propose a formal scheme to calculate robustness of an entire biological system by quantifying the negative entropy present in it. Our Methodology is based upon a", "start_char_pos": 0, "end_char_pos": 194 }, { "type": "R", "before": "entropy and negative entropy", "after": "(physico-chemical) and negative (biological) entropy, present in the system(TCA cycle, for this work)", "start_char_pos": 292, "end_char_pos": 320 }, { "type": "R", "before": "condition between these positive and negative entropy, could unambiguously provide a quantitative marker that describes the 'edge of life' for TCA cycle . Difference between concentration-profiles prevalent at the 'edge of life' and biologically observed TCA cycle, could quantitatively express the precise amount of negative entropy present in a typical biochemical network. We show here that it is not the existence of mere order, but the synchronization profile between ordered fluctuations, which accounts for biological robustness. An exhaustive sensitivity analysis could identify the concentrations, for which slightest perturbation can account for enormous increase in positive entropy. Since our algorithm is general, the same analysis can as well be performed on larger networks and (ideally) for an entire cell, if numerical data for concentration is available", "after": ", proposed here, could measure the robustness in TCA cycle in exact numeric terms, whereas the mixed strategy game between these entropies could quantitate the progression of stages of biological adaptation. 
Synchronization profile amongst macromolecular concentrations (even under environmental perturbations) is found to account for negative entropy and biological robustness. Emergence of synchronization profile was investigated with dynamically varying metabolite concentrations. Obtained results were verified with that from the deterministic simulation methods. Categorical plans to apply this algorithm in Cancer studies and anti-viral therapies are proposed alongside. From theoretical perspective, this work proposes a general, rigorous and alternative view of immunology", "start_char_pos": 364, "end_char_pos": 1235 } ]
[ 0, 142, 165, 322, 739, 900, 1058 ]
0904.0111
1
The mitotic spindle is an important intermediate structure in eucaryotic cell division, in which each of a pair of duplicated chromosomes is attached through microtubules to centrosomal bodies located close to the two poles of the dividing cell. It is widely believed that the spindle starts forming by the `capture' of chromosome pairs, held together by kinetochores, by randomly searching microtubules. We present a complete analytical formulation of this problem, in the case of a single fixed target and for arbitrary cell size. We derive a set of Green's functions for the microtubule dynamics and an associated set of first passage quantities. An implicit analytical expression for the probability distribution of the search time is then obtained , with appropriate boundary conditions at the outer cell membrane. We extract the conditions of optimized search from our formalism. Our results are in qualitative and semi-quantitative agreement with known experimental results for different cell types{\it .
The mitotic spindle is an important intermediate structure in eukaryotic cell division, in which each of a pair of duplicated chromosomes is attached through microtubules to centrosomal bodies located close to the two poles of the dividing cell. Several mechanisms are at work towards the formation of the spindle, one of which is the `capture' of chromosome pairs, held together by kinetochores, by randomly searching microtubules. Although the entire cell cycle can be up to 24 hours long, the mitotic phase typically takes only less than an hour. How does the cell keep the duration of mitosis within this limit? Previous theoretical studies have suggested that the chromosome search and capture is optimized by tuning the microtubule dynamic parameters to minimize the search time. In this paper, we examine this conjecture. We compute the mean search time for a single target by microtubules from a single nucleating site, using a systematic and rigorous theoretical approach, for arbitrary kinetic parameters. The result is extended to multiple targets and nucleating sites by physical arguments. Estimates of mitotic time scales are then obtained for different cells using experimental data. In yeast and mammalian cells, the observed changes in microtubule kinetics between interphase and mitosis are beneficial in reducing the search time. In{\it Xenopus extracts, by contrast, the opposite effect is observed, in agreement with the current understanding that large cells use additional mechanisms to regulate the duration of the mitotic phase .
[ { "type": "R", "before": "eucaryotic", "after": "eukaryotic", "start_char_pos": 62, "end_char_pos": 72 }, { "type": "R", "before": "It is widely believed that the spindle starts forming by", "after": "Several mechanisms are at work towards the formation of the spindle, one of which is", "start_char_pos": 246, "end_char_pos": 302 }, { "type": "R", "before": "We present a complete analytical formulation of this problem, in the case of a single fixed target and for arbitrary cell size. We derive a set of Green's functions for the microtubule dynamics and an associated set of first passage quantities. An implicit analytical expression for the probability distribution of the search time is then obtained , with appropriate boundary conditions at the outer cell membrane. We extract the conditions of optimized search from our formalism. Our results are in qualitative and semi-quantitative agreement with known experimental results for different cell types", "after": "Although the entire cell cycle can be up to 24 hours long, the mitotic phase typically takes only less than an hour. How does the cell keep the duration of mitosis within this limit? Previous theoretical studies have suggested that the chromosome search and capture is optimized by tuning the microtubule dynamic parameters to minimize the search time. In this paper, we examine this conjecture. We compute the mean search time for a single target by microtubules from a single nucleating site, using a systematic and rigorous theoretical approach, for arbitrary kinetic parameters. The result is extended to multiple targets and nucleating sites by physical arguments. Estimates of mitotic time scales are then obtained for different cells using experimental data. In yeast and mammalian cells, the observed changes in microtubule kinetics between interphase and mitosis are beneficial in reducing the search time. 
In", "start_char_pos": 405, "end_char_pos": 1005 }, { "type": "A", "before": null, "after": "Xenopus", "start_char_pos": 1010, "end_char_pos": 1010 }, { "type": "A", "before": null, "after": "extracts, by contrast, the opposite effect is observed, in agreement with the current understanding that large cells use additional mechanisms to regulate the duration of the mitotic phase", "start_char_pos": 1011, "end_char_pos": 1011 } ]
[ 0, 245, 404, 532, 649, 819, 885 ]
0904.0624
1
We provide a new approach to scenario generation for the purpose of risk management in the banking industry. We connect ideas from standard techniques -- like historical and Monte Carlo simulation -- to a hybrid technique that shares the advantages of standard procedures but reduces several of their drawbacks. Instead of considering the static problem of constructing one or ten day ahead distributions , we embed the problem into a dynamic framework, where any time horizon can be consistently simulated. Second , we use standard models from mathematical finance for each risk factor, bridging this way between the worlds of trading and risk management. Our approach is based on stochastic differential equations (SDEs) like the HJM-equation or the Black-Scholes equation governing the time evolution of risk factors, on an empirical calibration method to the market for the chosen SDEs, and on an Euler scheme (or high-order schemes) for the numerical implementation of the respective SDEs. Furthermore we are able to easily incorporate "middle-size" and "large-size" events within our framework . Results of a concrete implementation are provided. The method also allows a precise distinction between the information obtained from the market and the one coming from the necessary intuition of the risk manager .
We provide a new dynamic approach to scenario generation for the purposes of risk management in the banking industry. We connect ideas from conventional techniques -- like historical and Monte Carlo simulation -- and we come up with a hybrid method that shares the advantages of standard procedures but eliminates several of their drawbacks. Instead of considering the static problem of constructing one or ten day ahead distributions for vectors of risk factors , we embed the problem into a dynamic framework, where any time horizon can be consistently simulated. Additionally , we use standard models from mathematical finance for each risk factor, whence bridging the worlds of trading and risk management. Our approach is based on stochastic differential equations (SDEs) , like the HJM-equation or the Black-Scholes equation , governing the time evolution of risk factors, on an empirical calibration method to the market for the chosen SDEs, and on an Euler scheme (or high-order schemes) for the numerical evaluation of the respective SDEs. The empirical calibration procedure presented in this paper can be seen as the SDE-counterpart of the so called Filtered Historical Simulation method; the behavior of volatility stems in our case out of the assumptions on the underlying SDEs. Furthermore, we are able to easily incorporate "middle-size" and "large-size" events within our framework always making a precise distinction between the information obtained from the market and the one coming from the necessary a-priori intuition of the risk manager . Results of one concrete implementation are provided .
[ { "type": "A", "before": null, "after": "dynamic", "start_char_pos": 17, "end_char_pos": 17 }, { "type": "R", "before": "purpose", "after": "purposes", "start_char_pos": 58, "end_char_pos": 65 }, { "type": "R", "before": "standard", "after": "conventional", "start_char_pos": 132, "end_char_pos": 140 }, { "type": "R", "before": "to a hybrid technique", "after": "and we come up with a hybrid method", "start_char_pos": 201, "end_char_pos": 222 }, { "type": "R", "before": "reduces", "after": "eliminates", "start_char_pos": 277, "end_char_pos": 284 }, { "type": "A", "before": null, "after": "for vectors of risk factors", "start_char_pos": 406, "end_char_pos": 406 }, { "type": "R", "before": "Second", "after": "Additionally", "start_char_pos": 510, "end_char_pos": 516 }, { "type": "R", "before": "bridging this way between", "after": "whence bridging", "start_char_pos": 590, "end_char_pos": 615 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 725, "end_char_pos": 725 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 778, "end_char_pos": 778 }, { "type": "R", "before": "implementation", "after": "evaluation", "start_char_pos": 960, "end_char_pos": 974 }, { "type": "R", "before": "Furthermore", "after": "The empirical calibration procedure presented in this paper can be seen as the SDE-counterpart of the so called Filtered Historical Simulation method; the behavior of volatility stems in our case out of the assumptions on the underlying SDEs. Furthermore,", "start_char_pos": 999, "end_char_pos": 1010 }, { "type": "R", "before": ". Results of a concrete implementation are provided. The method also allows a", "after": "always making a", "start_char_pos": 1104, "end_char_pos": 1181 }, { "type": "A", "before": null, "after": "a-priori", "start_char_pos": 1289, "end_char_pos": 1289 }, { "type": "A", "before": null, "after": ". Results of one concrete implementation are provided", "start_char_pos": 1320, "end_char_pos": 1320 } ]
[ 0, 109, 312, 509, 658, 998, 1156 ]
0904.0947
1
Biochemical reaction networks in living cells usually involve reversible covalent modification of signaling molecules, such as protein phosphorylation. Under the frequent conditions of small molecule numbers, mass action theory becomes insufficient to describe the dynamics of such systems. Instead, the biochemical reactions must be treated as stochastic processes , producing intrinsic concentration fluctuations of the chemicals. We investigate the stochastic reaction kinetics of covalent modification cycles (CMCs) by analytical modelling and numerically exact Monte-Carlo simulation of the temporaly fluctuating concentrationx(t). The statistical behaviour of this simple network module turns out to be so rich that CMCs can be viewed as versatile and tunable noise generators. Depending on the parameter regime, we find for the probability density P(x) several qualitatively different classes of distribution functions, including powerlaw distributions with a fractional and tunable exponent. These findings challenge the traditional view of biochemical control networks as deterministic computational systems .
Biochemical reaction networks in living cells usually involve reversible covalent modification of signaling molecules, such as protein phosphorylation. Under conditions of small molecule numbers, as is frequently the case in living cells, mass action theory fails to describe the dynamics of such systems. Instead, the biochemical reactions must be treated as stochastic processes that intrinsically generate concentration fluctuations of the chemicals. We investigate the stochastic reaction kinetics of covalent modification cycles (CMCs) by analytical modelling and numerically exact Monte-Carlo simulation of the temporally fluctuating concentration. Depending on the parameter regime, we find for the probability density of the concentration qualitatively distinct classes of distribution functions, including powerlaw distributions with a fractional and tunable exponent. These findings challenge the traditional view of biochemical control networks as deterministic computational systems and suggest that CMCs in cells can function as versatile and tunable noise generators .
[ { "type": "D", "before": "the frequent", "after": null, "start_char_pos": 158, "end_char_pos": 170 }, { "type": "A", "before": null, "after": "as is frequently the case in living cells,", "start_char_pos": 209, "end_char_pos": 209 }, { "type": "R", "before": "becomes insufficient", "after": "fails", "start_char_pos": 229, "end_char_pos": 249 }, { "type": "R", "before": ", producing intrinsic", "after": "that intrinsically generate", "start_char_pos": 367, "end_char_pos": 388 }, { "type": "R", "before": "temporaly fluctuating concentrationx(t). The statistical behaviour of this simple network module turns out to be so rich that CMCs can be viewed as versatile and tunable noise generators.", "after": "temporally fluctuating concentration.", "start_char_pos": 597, "end_char_pos": 784 }, { "type": "R", "before": "P(x) several qualitatively different", "after": "of the concentration qualitatively distinct", "start_char_pos": 856, "end_char_pos": 892 }, { "type": "A", "before": null, "after": "and suggest that CMCs in cells can function as versatile and tunable noise generators", "start_char_pos": 1118, "end_char_pos": 1118 } ]
[ 0, 151, 291, 433, 637, 784, 1000 ]
0904.0989
1
We systematically investigate the community structure of the yeast protein interaction network . We employ methods that allow us to identify communities in the network at multiple resolutions as, a priori, there is no single scale of interest. We use novel, stringent tests to find strong evidence for a link between topology and function. Crucially, our tests control for the fact that interacting partners are more similar than a randomly-chosen pair, which is essential for a fair test of functional similarity. We find that many biologically homogeneous communities , that are robust over many resolutions, are surprisingly large. We thus not only identify complexes from the interaction data but also larger collections of strongly-interacting proteins . Communities that contain interactions from tandem affinity purification and co-immunoprecipitation data are far more likely to be biologically homogeneous than those from yeast-two-hybrid and split-ubiquitin data. We find that high clustering coefficient is a very good indicator of biological homogeneity in small communities. For larger communities, we find that link density is also important. Our results significantly improve the understanding of the modular structure of the protein interaction network- and how that modularity is reflected in biological homogeneity and experimental type. Our results suggest a method to select groups of functionally similar proteins even when no annotation is yet known, thereby yielding a valuable diagnostic tool to predict groups that act concertedly within the cell .
Motivation: If biology is modular then clusters, or communities, of proteins derived using only protein-protein interaction network structure might define protein modules with similar biological roles. We investigate the connection between biological modules and network communities in yeast and ask how the functional similarity of the communities that we find depends on the scales at which we probe the network. Results: We find many proteins lie in functionally homogeneous communities (a maximum of 2777 out of 4028 proteins) which suggests that network structure does indeed help identify sets of proteins with similar functions. The homogeneity of the communities depends on the scale selected. We use a novel test and two independent characterizations of protein function to determine the functional homogeneity of communities. We exploit the connection between network structure and biological function to select groups of proteins which are likely to participate in similar biological functions. We show that high mean clustering coefficient and low mean node betweenness centrality can be used to predict functionally homogeneous communities. Availability: All the data sets and the community detection algorithm are available online .
[ { "type": "R", "before": "We systematically investigate the community structure of the yeast protein interaction network . We employ methods that allow us to identify communities in", "after": "Motivation: If biology is modular then clusters, or communities, of proteins derived using only protein-protein interaction network structure might define protein modules with similar biological roles. We investigate the connection between biological modules and network communities in yeast and ask how the functional similarity of", "start_char_pos": 0, "end_char_pos": 155 }, { "type": "R", "before": "network at multiple resolutions as, a priori, there is no single scale of interest. We use novel, stringent tests to find strong evidence for a link between topology and function. Crucially, our tests control for the fact that interacting partners are more similar than a randomly-chosen pair, which is essential for a fair test of functional similarity. We find that many biologically homogeneous communities , that are robust over many resolutions, are surprisingly large. We thus not only identify complexes from the interaction data but also larger collections of strongly-interacting proteins . Communities that contain interactions from tandem affinity purification and co-immunoprecipitation data are far more likely to be biologically homogeneous than those from yeast-two-hybrid", "after": "communities that we find depends on the scales at which we probe the network. Results: We find many proteins lie in functionally homogeneous communities (a maximum of 2777 out of 4028 proteins) which suggests that network structure does indeed help identify sets of proteins with similar functions. The homogeneity of the communities depends on the scale selected. We use a novel test and two independent characterizations of protein function to determine the functional homogeneity of communities. 
We exploit the connection between network structure", "start_char_pos": 160, "end_char_pos": 947 }, { "type": "R", "before": "split-ubiquitin data. We find that high clustering coefficient is a very good indicator of biological homogeneity in small communities. For larger communities, we find that link density is also important. Our results significantly improve the understanding of the modular structure of the protein interaction network- and how that modularity is reflected in biological homogeneity and experimental type. Our results suggest a method", "after": "biological function", "start_char_pos": 952, "end_char_pos": 1384 }, { "type": "R", "before": "functionally similar proteins even when no annotation is yet known, thereby yielding a valuable diagnostic tool to predict groups that act concertedly within the cell", "after": "proteins which are likely to participate in similar biological functions. We show that high mean clustering coefficient and low mean node betweenness centrality can be used to predict functionally homogeneous communities. Availability: All the data sets and the community detection algorithm are available online", "start_char_pos": 1405, "end_char_pos": 1571 } ]
[ 0, 96, 243, 339, 514, 634, 973, 1087, 1156, 1355 ]
0904.0989
2
Motivation : If biology is modular then clusters, or communities, of proteins derived using only protein-protein interaction network structure might define protein modules with similar biological roles. We investigate the connection between biological modules and network communities in yeast and ask how the functional similarity of the communities that we find depends on the scales at which we probe the network. Results: We find many proteins lie in functionally homogeneous communities (a maximum of 2777 out of 4028 proteins) which suggests that network structure does indeed help identify sets of proteins with similar functions. The homogeneity of the communities depends on the scale selected . We use a novel test and two independent characterizations of protein function to determine the functional homogeneity of communities. We exploit the connection between network structure and biological function to select groups of proteins which are likely to participate in similar biological functions . We show that high mean clustering coefficient and low mean node betweenness centrality can be used to predict functionally homogeneouscommunities. Availability: All the data sets and the community detection algorithm are available online .
Background : If biology is modular then clusters, or communities, of proteins derived using only protein interaction network structure should define protein modules with similar biological roles. We investigate the link between biological modules and network communities in yeast and its relationship to the scale at which we probe the network. Results: Our results demonstrate that the functional homogeneity of communities depends on the scale selected , and that almost all proteins lie in a functionally homogeneous community at some scale. We judge functional homogeneity using a novel test and three independent characterizations of protein function , and find a high degree of overlap between these measures . We show that a high mean clustering coefficient of a community can be used to identify those that are functionally homogeneous. By tracing the community membership of a protein through multiple scales we demonstrate how our approach could be useful to biologists focusing on a particular protein. Conclusions: We show that there is no one scale of interest in the community structure of the yeast protein interaction network, but we can identify the range of resolution parameters that yield the most functionally coherent communities, and predict which communities are most likely to be functionally homogeneous .
[ { "type": "R", "before": "Motivation", "after": "Background", "start_char_pos": 0, "end_char_pos": 10 }, { "type": "R", "before": "protein-protein", "after": "protein", "start_char_pos": 97, "end_char_pos": 112 }, { "type": "R", "before": "might", "after": "should", "start_char_pos": 143, "end_char_pos": 148 }, { "type": "R", "before": "connection", "after": "link", "start_char_pos": 222, "end_char_pos": 232 }, { "type": "R", "before": "ask how the functional similarity of the communities that we find depends on the scales", "after": "its relationship to the scale", "start_char_pos": 297, "end_char_pos": 384 }, { "type": "R", "before": "We find many proteins lie in functionally homogeneous communities (a maximum of 2777 out of 4028 proteins) which suggests that network structure does indeed help identify sets of proteins with similar functions. The homogeneity of the", "after": "Our results demonstrate that the functional homogeneity of", "start_char_pos": 425, "end_char_pos": 659 }, { "type": "R", "before": ". We use", "after": ", and that almost all proteins lie in a functionally homogeneous community at some scale. We judge functional homogeneity using", "start_char_pos": 702, "end_char_pos": 710 }, { "type": "R", "before": "two", "after": "three", "start_char_pos": 728, "end_char_pos": 731 }, { "type": "R", "before": "to determine the functional homogeneity of communities. We exploit the connection between network structure and biological function to select groups of proteins which are likely to participate in similar biological functions", "after": ", and find a high degree of overlap between these measures", "start_char_pos": 782, "end_char_pos": 1006 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 1022, "end_char_pos": 1022 }, { "type": "R", "before": "and low mean node betweenness centrality", "after": "of a community", "start_char_pos": 1056, "end_char_pos": 1096 }, { "type": "R", "before": "predict functionally homogeneouscommunities. 
Availability: All the data sets and the community detection algorithm are available online", "after": "identify those that are functionally homogeneous. By tracing the community membership of a protein through multiple scales we demonstrate how our approach could be useful to biologists focusing on a particular protein. Conclusions: We show that there is no one scale of interest in the community structure of the yeast protein interaction network, but we can identify the range of resolution parameters that yield the most functionally coherent communities, and predict which communities are most likely to be functionally homogeneous", "start_char_pos": 1112, "end_char_pos": 1247 } ]
[ 0, 202, 415, 636, 703, 837, 1008, 1156 ]
0904.1078
1
We apply the quadratic hedging scheme developed by F\"ollmer , Schweizer, and Sondermann to European contingent products whose underlying asset is modeled using a GARCH process . The main contributions of this work consist of showing that local risk-minimizing strategies with respect to the physical measure do exist, even though an associated minimal martingale measure is only available in the presence of bounded innovations. More importantly, since those local risk-minimizing strategies are convoluted and difficult to evaluate, we introduce Girsanov-like risk-neutral measures for the log-prices that yield more tractable and useful results. Regarding this subject, we focus on GARCH time series models with Gaussian and multinomial innovations and we provide specific conditions under which those martingale measures are appropriate in the context of quadratic hedging. In the Gaussian case, those conditions have to do with the finiteness of the kurtosis and , for multinomial innovations, an inequality between the trend terms of the prices and of the volatility equations needs to be satisfied .
We apply a quadratic hedging scheme developed by Foellmer , Schweizer, and Sondermann to European contingent products whose underlying asset is modeled using a GARCH process and show that local risk-minimizing strategies with respect to the physical measure do exist, even though an associated minimal martingale measure is only available in the presence of bounded innovations. More importantly, since those local risk-minimizing strategies are in general convoluted and difficult to evaluate, we introduce Girsanov-like risk-neutral measures for the log-prices that yield more tractable and useful results. Regarding this subject, we focus on GARCH time series models with Gaussian innovations and we provide specific sufficient conditions that have to do with the finiteness of the kurtosis, under which those martingale measures are appropriate in the context of quadratic hedging. When this equivalent martingale measure is adapted to the price representation we are able to recover out of it the classical pricing formulas of Duan and Heston-Nandi, as well as hedging schemes that improve the performance of those proposed in the literature .
[ { "type": "R", "before": "the", "after": "a", "start_char_pos": 9, "end_char_pos": 12 }, { "type": "R", "before": "F\\\"ollmer", "after": "Foellmer", "start_char_pos": 51, "end_char_pos": 60 }, { "type": "R", "before": ". The main contributions of this work consist of showing", "after": "and show", "start_char_pos": 177, "end_char_pos": 233 }, { "type": "A", "before": null, "after": "in general", "start_char_pos": 497, "end_char_pos": 497 }, { "type": "D", "before": "and multinomial", "after": null, "start_char_pos": 725, "end_char_pos": 740 }, { "type": "R", "before": "conditions", "after": "sufficient conditions that have to do with the finiteness of the kurtosis,", "start_char_pos": 777, "end_char_pos": 787 }, { "type": "R", "before": "In the Gaussian case, those conditions have to do with the finiteness of", "after": "When this equivalent martingale measure is adapted to", "start_char_pos": 879, "end_char_pos": 951 }, { "type": "R", "before": "kurtosis and , for multinomial innovations, an inequality between the trend terms of the prices and of the volatility equations needs to be satisfied", "after": "price representation we are able to recover out of it the classical pricing formulas of Duan and Heston-Nandi, as well as hedging schemes that improve the performance of those proposed in the literature", "start_char_pos": 956, "end_char_pos": 1105 } ]
[ 0, 178, 429, 649, 878 ]
0904.1515
1
We investigate the structural and dynamical properties of the transcriptional regulatory network of the yeast {\it Saccharomyces cerevisiae} and compare them with a previously proposed ensemble of networks generated by mimicking the transcriptional regulation process within the cell. Even though the model ensemble successfully reproduces the degree distributions , degree-degree correlations and the k-core structure observed in Yeast , we find subtle differences in the structure that are reflected in the dynamics of regulatory-like processes . We use a Boolean model for the regulation dynamics and comment on the impact of various Boolean function classes that have been suggested to better represent in vivo regulatory interactions. In addition to an exceptionally large dynamical core network and an excess of self-intercting genes, we find that , even when these differences are eliminated, the Yeastaccommodates more dynamical attractors than best matching model networks which typically come with a single dominant attractor. We further investigate the robustness of the networks under minor perturbations. We find that , requiring all inputs of the Boolean functions to be nonredundant squeezes the stability of the system to a narrower band near the order-chaos boundary, while the network stability still depends strongly on the used function class. The difference between the model and the Yeast in terms of stability is marginal, which is consistent with the type of statistically outlier motifs found in the core .
We investigate the structural and dynamical properties of the transcriptional regulatory network of the yeast {\it Saccharomyces cerevisiae} and compare it with two unbiased ensembles: one obtained by reshuffling the edges and the other generated by mimicking the transcriptional regulation mechanism within the cell. Both ensembles reproduce the degree distributions (the first -by construction- exactly and the second approximately) , degree-degree correlations and the k-core structure observed in Yeast . An exceptionally large dynamically relevant core network found in Yeast in comparison with the second ensemble points to a strong bias towards a URLanization which is achieved by subtle modifications in the network's degree distributions . We use a Boolean model of regulatory dynamics with various classes of update functions to represent in vivo regulatory interactions. We find that the Yeast's core network has a qualitatively different behaviour, accommodating on average multiple attractors unlike typical members of both reference ensembles which converge to a single dominant attractor. Finally, we investigate the robustness of the networks and find that the stability depends strongly on the used function class. The robustness measure is squeezed into a narrower band around the order-chaos boundary when Boolean inputs are required to be nonredundant on each node. However, the difference between the reference models and the Yeast 's core is marginal, suggesting that the dynamically stable network elements are located mostly on the peripherals of the regulatory network. Consistently, the statistically significant three-node motifs in the dynamical core of Yeast turn out to be different from and less stable than those found in the full transcriptional regulatory network .
[ { "type": "R", "before": "them with a previously proposed ensemble of networks", "after": "it with two unbiased ensembles: one obtained by reshuffling the edges and the other", "start_char_pos": 153, "end_char_pos": 205 }, { "type": "R", "before": "process", "after": "mechanism", "start_char_pos": 260, "end_char_pos": 267 }, { "type": "R", "before": "Even though the model ensemble successfully reproduces the degree distributions", "after": "Both ensembles reproduce the degree distributions (the first -by construction- exactly and the second approximately)", "start_char_pos": 285, "end_char_pos": 364 }, { "type": "R", "before": ", we find subtle differences in the structure that are reflected in the dynamics of regulatory-like processes", "after": ". An exceptionally large dynamically relevant core network found in Yeast in comparison with the second ensemble points to a strong bias towards a URLanization which is achieved by subtle modifications in the network's degree distributions", "start_char_pos": 437, "end_char_pos": 546 }, { "type": "R", "before": "for the regulation dynamics and comment on the impact of various Boolean function classes that have been suggested to better", "after": "of regulatory dynamics with various classes of update functions to", "start_char_pos": 572, "end_char_pos": 696 }, { "type": "R", "before": "In addition to an exceptionally large dynamical core network and an excess of self-intercting genes, we find that , even when these differences are eliminated, the Yeastaccommodates more dynamical attractors than best matching model networks which typically come with", "after": "We find that the Yeast's core network has a qualitatively different behaviour, accommodating on average multiple attractors unlike typical members of both reference ensembles which converge to", "start_char_pos": 740, "end_char_pos": 1007 }, { "type": "R", "before": "We further", "after": "Finally, we", "start_char_pos": 1037, "end_char_pos": 1047 }, { "type": "R", 
"before": "under minor perturbations. We find that , requiring all inputs of the Boolean functions to be nonredundant squeezes the stability of the system to a narrower band near the order-chaos boundary, while the network stability still", "after": "and find that the stability", "start_char_pos": 1091, "end_char_pos": 1318 }, { "type": "A", "before": null, "after": "robustness measure is squeezed into a narrower band around the order-chaos boundary when Boolean inputs are required to be nonredundant on each node. However, the", "start_char_pos": 1368, "end_char_pos": 1368 }, { "type": "R", "before": "model", "after": "reference models", "start_char_pos": 1392, "end_char_pos": 1397 }, { "type": "R", "before": "in terms of stability", "after": "'s core", "start_char_pos": 1412, "end_char_pos": 1433 }, { "type": "R", "before": "which is consistent with the type of statistically outlier motifs found in the core", "after": "suggesting that the dynamically stable network elements are located mostly on the peripherals of the regulatory network. Consistently, the statistically significant three-node motifs in the dynamical core of Yeast turn out to be different from and less stable than those found in the full transcriptional regulatory network", "start_char_pos": 1447, "end_char_pos": 1530 } ]
[ 0, 284, 548, 739, 1036, 1117, 1363 ]
0904.1587
1
Biochemical processes typically involve huge numbers of individual steps, each with its own dynamical rate constants. For example, kinetic proofreading processes rely upon numerous sequential reactions in order to guarantee the precise construction of specific macromolecules Hopfield, 1974 . In this work, we study the transient properties of such systems and fully characterize their first passage time (completion) distributions. In particular, we provide explicit expressions for the mean and the variance of the kinetic proofreading completion time . We find that, for a wide range of parameters, as the system size grows, the completion time behavior simplifies: it becomes either deterministic or exponentially distributed, with a very narrow transition between the two regimes. In both regimes, the full system dynamical complexity is trivial compared to its apparent structural complexity. Similar simplicity will arise in the dynamics of other complex biochemical processes. In particular, these findings suggest not only that one may not be able to understand individual elementary reactions from macroscopic observations, but also that such understanding may be unnecessary.
Biochemical processes typically involve huge numbers of individual reversible steps, each with its own dynamical rate constants. For example, kinetic proofreading processes rely upon numerous sequential reactions in order to guarantee the precise construction of specific macromolecules . In this work, we study the transient properties of such systems and fully characterize their first passage (completion) time distributions. In particular, we provide explicit expressions for the mean and the variance of the completion time for a kinetic proofreading process and computational analyses for more complicated biochemical systems . We find that, for a wide range of parameters, as the system size grows, the completion time behavior simplifies: it becomes either deterministic or exponentially distributed, with a very narrow transition between the two regimes. In both regimes, the dynamical complexity of the full system is trivial compared to its apparent structural complexity. Similar simplicity is likely to arise in the dynamics of many complex multi-step biochemical processes. In particular, these findings suggest not only that one may not be able to understand individual elementary reactions from macroscopic observations, but also that such understanding may be unnecessary.
[ { "type": "A", "before": null, "after": "reversible", "start_char_pos": 67, "end_char_pos": 67 }, { "type": "D", "before": "Hopfield, 1974", "after": null, "start_char_pos": 277, "end_char_pos": 291 }, { "type": "D", "before": "time", "after": null, "start_char_pos": 401, "end_char_pos": 405 }, { "type": "A", "before": null, "after": "time", "start_char_pos": 419, "end_char_pos": 419 }, { "type": "R", "before": "kinetic proofreading completion time", "after": "completion time for a kinetic proofreading process and computational analyses for more complicated biochemical systems", "start_char_pos": 519, "end_char_pos": 555 }, { "type": "R", "before": "full system dynamical complexity", "after": "dynamical complexity of the full system", "start_char_pos": 809, "end_char_pos": 841 }, { "type": "R", "before": "will", "after": "is likely to", "start_char_pos": 920, "end_char_pos": 924 }, { "type": "R", "before": "other complex", "after": "many complex multi-step", "start_char_pos": 950, "end_char_pos": 963 } ]
[ 0, 118, 434, 557, 787, 900, 986 ]
0904.1653
1
The present paper provides a multi-period contagion model in the credit risk field. Our model is an extension of Davis and Lo's infectious default model. We consider an economy of n firms which may default directly or may be infected by another defaulting firm (a domino effect being also possible). The spontaneous default without external influence and the infections are described by not necessary independent Bernoulli-type random variables. Moreover, several contaminations could be necessary to infect another firm. In this paper we compute the probability distribution function of the total number of defaults in a dependency context. We also give a simple recursive algorithm to compute this distribution in an exchangeability context. Numerical applications illustrate the impact of exchangeability among direct defaults and among contaminations, on different indicators calculated from the law of the total number of defaults .
The present paper provides a multi-period contagion model in the credit risk field. Our model is an extension of Davis and Lo's infectious default model. We consider an economy of n firms which may default directly or may be infected by other defaulting firms (a domino effect being also possible). The spontaneous default without external influence and the infections are described by not necessarily independent Bernoulli-type random variables. Moreover, several contaminations could be required to infect another firm. In this paper we compute the probability distribution function of the total number of defaults in a dependency context. We also give a simple recursive algorithm to compute this distribution in an exchangeability context. Numerical applications illustrate the impact of exchangeability among direct defaults and among contaminations, on different indicators calculated from the law of the total number of defaults . We then examine the calibration of the model on iTraxx data before and during the crisis. The dynamic feature together with the contagion effect seem to have a significant impact on the model performance, especially during the recent distressed period .
[ { "type": "R", "before": "another defaulting firm", "after": "other defaulting firms", "start_char_pos": 237, "end_char_pos": 260 }, { "type": "R", "before": "necessary", "after": "necessarily", "start_char_pos": 391, "end_char_pos": 400 }, { "type": "R", "before": "necessary", "after": "required", "start_char_pos": 488, "end_char_pos": 497 }, { "type": "A", "before": null, "after": ". We then examine the calibration of the model on iTraxx data before and during the crisis. The dynamic feature together with the contagion effect seem to have a significant impact on the model performance, especially during the recent distressed period", "start_char_pos": 936, "end_char_pos": 936 } ]
[ 0, 83, 153, 299, 445, 521, 641, 743 ]
0904.1798
1
The absence of arbitrages of the first kind, a weakening of the " No Free Lunch with Vanishing Risk " condition, is analyzed in a general semimartingale financial market model. In the spirit of the Fundamental Theorem of Asset Pricing (FTAP) , it is shown that there is absence of arbitrages of the first kind in the market if and only if an equivalent local martingale deflator (ELMD) exists. An ELMD is a strictly positive process that , when deflated by it, discounted nonnegative wealth processes become local martingales. In terms of measures, absence of arbitrages of the first kind is shown to be equivalent to the existence of a finitely additive probability, weakly equivalent to the original and locally countably additive, under which the discounted asset-price process is a "local martingale ". Finally, the aforementioned results are used to obtain an independent proof of the FTAP .
The absence of arbitrages of the first kind, a weakening of the `` No Free Lunch with Vanishing Risk '' condition, is analyzed in a general semimartingale financial market model. In the spirit of the Fundamental Theorem of Asset Pricing , it is shown that there is equivalence between the absence of arbitrages of the first kind and the existence of a strictly positive process that acts as a local martingale deflator on nonnegative wealth processes .
[ { "type": "R", "before": "\"", "after": "``", "start_char_pos": 64, "end_char_pos": 65 }, { "type": "R", "before": "\"", "after": "''", "start_char_pos": 100, "end_char_pos": 101 }, { "type": "D", "before": "(FTAP)", "after": null, "start_char_pos": 235, "end_char_pos": 241 }, { "type": "A", "before": null, "after": "equivalence between the", "start_char_pos": 270, "end_char_pos": 270 }, { "type": "R", "before": "in the market if and only if an equivalent local martingale deflator (ELMD) exists. An ELMD is", "after": "and the existence of", "start_char_pos": 311, "end_char_pos": 405 }, { "type": "R", "before": ", when deflated by it, discounted nonnegative wealth processes become local martingales. In terms of measures, absence of arbitrages of the first kind is shown to be equivalent to the existence of a finitely additive probability, weakly equivalent to the original and locally countably additive, under which the discounted asset-price process is a \"local martingale \". Finally, the aforementioned results are used to obtain an independent proof of the FTAP", "after": "acts as a local martingale deflator on nonnegative wealth processes", "start_char_pos": 439, "end_char_pos": 895 } ]
[ 0, 176, 394, 527, 807 ]
0904.1798
2
The absence of arbitrages of the first kind, a weakening of the ``No Free Lunch with Vanishing Risk'' condition, is analyzed in a general semimartingale financial market model . In the spirit of the Fundamental Theorem of Asset Pricing , it is shown that there is equivalence between the absence of arbitrages of the first kind and the existence of a strictly positive process that acts as a local martingale deflator on nonnegative wealth processes.
In a semimartingale financial market model , it is shown that there is equivalence between absence of arbitrages of the first kind (a weak viability condition) and the existence of a strictly positive process that acts as a local martingale deflator on nonnegative wealth processes.
[ { "type": "R", "before": "The absence of arbitrages of the first kind, a weakening of the ``No Free Lunch with Vanishing Risk'' condition, is analyzed in a general", "after": "In a", "start_char_pos": 0, "end_char_pos": 137 }, { "type": "D", "before": ". In the spirit of the Fundamental Theorem of Asset Pricing", "after": null, "start_char_pos": 176, "end_char_pos": 235 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 284, "end_char_pos": 287 }, { "type": "A", "before": null, "after": "(a weak viability condition)", "start_char_pos": 328, "end_char_pos": 328 } ]
[ 0, 177 ]
0904.1805
1
To quantify the operational risk capital charge under the current regulatory framework for banking supervision, referred to as Basel II, many banks adopt the Loss Distribution Approach. There are many modelling issues that should be resolved to use the approach in practice. In this paper we review the quantitative methods suggested in literature for implementation of the approach .
To quantify the operational risk capital charge under the current regulatory framework for banking supervision, referred to as Basel II, many banks adopt the Loss Distribution Approach. There are many modeling issues that should be resolved to use the approach in practice. In this paper we review the quantitative methods suggested in literature for implementation of the approach . In particular, the use of the Bayesian inference method that allows to take expert judgement and parameter uncertainty into account, modeling dependence and inclusion of insurance are discussed .
[ { "type": "R", "before": "modelling", "after": "modeling", "start_char_pos": 201, "end_char_pos": 210 }, { "type": "A", "before": null, "after": ". In particular, the use of the Bayesian inference method that allows to take expert judgement and parameter uncertainty into account, modeling dependence and inclusion of insurance are discussed", "start_char_pos": 383, "end_char_pos": 383 } ]
[ 0, 185, 274 ]
0904.2913
1
A study of the boundedness in probability of the set of possible wealth outcomes of an economic agent facing constraints, and with limited access to information , is undertaken. The wealth-process set is abstractly structured with reasonable economic properties, instead of the usual practice of taking it to consist of stochastic integrals against a semimartingale integrator. We obtain the equivalence of (a) the boundedness in probability of wealth outcomes with (b) the existence of at least one deflator that make the deflated wealth processes have a generalized supermartingale property. Specializing in the case of full information, we obtain as a consequence that in a viable market all wealth processes have versions that are semimartingales .
We undertake a study of market viability from the perspective of a financial agent with limited access to information . The set of wealth processes available to the agent is structured with reasonable economic properties, instead of the usual practice of taking it to consist of stochastic integrals against a semimartingale integrator. We obtain the equivalence of the boundedness in probability of the set of terminal wealth outcomes with the existence of at least one strictly positive deflator that makes the deflated wealth processes have a generalized supermartingale property. Specializing to the case of full agent's information, we obtain as a consequence that in a viable market every properly discounted wealth processes has a version that is a semimartingale .
[ { "type": "R", "before": "A study of the boundedness in probability of the set of possible wealth outcomes of an economic agent facing constraints, and", "after": "We undertake a study of market viability from the perspective of a financial agent", "start_char_pos": 0, "end_char_pos": 125 }, { "type": "R", "before": ", is undertaken. The wealth-process set is abstractly", "after": ". The set of wealth processes available to the agent is", "start_char_pos": 161, "end_char_pos": 214 }, { "type": "D", "before": "(a)", "after": null, "start_char_pos": 407, "end_char_pos": 410 }, { "type": "A", "before": null, "after": "the set of terminal", "start_char_pos": 445, "end_char_pos": 445 }, { "type": "D", "before": "(b)", "after": null, "start_char_pos": 467, "end_char_pos": 470 }, { "type": "R", "before": "deflator that make", "after": "strictly positive deflator that makes", "start_char_pos": 501, "end_char_pos": 519 }, { "type": "R", "before": "in", "after": "to", "start_char_pos": 608, "end_char_pos": 610 }, { "type": "A", "before": null, "after": "agent's", "start_char_pos": 628, "end_char_pos": 628 }, { "type": "R", "before": "all wealth processes have versions that are semimartingales", "after": "every properly discounted wealth processes has a version that is a semimartingale", "start_char_pos": 693, "end_char_pos": 752 } ]
[ 0, 177, 377, 594 ]
0904.2913
2
We undertake a study of market viability from the perspective of a financial agent with limited access to information. The set of wealth processes available to the agent is structured with reasonable economic properties, instead of the usual practice of taking it to consist of stochastic integrals against a semimartingale integrator. We obtain the equivalence of the boundedness in probability of the set of terminal wealth outcomes with the existence of at least one strictly positive deflator that makes the deflated wealth processes have a generalized supermartingale property. Specializing to the case of full agent's information, we obtain as a consequence that in a viable market every properly discounted wealth processes has a version that is a semimartingale.
We undertake a study of markets from the perspective of a financial agent with limited access to information. The set of wealth processes available to the agent is structured with reasonable economic properties, instead of the usual practice of taking it to consist of stochastic integrals against a semimartingale integrator. We obtain the equivalence of the boundedness in probability of the set of terminal wealth outcomes (which in turn is equivalent to the weak market viability condition of absence of arbitrage of the first kind) with the existence of at least one strictly positive deflator that makes the deflated wealth processes have a generalized supermartingale property. Specializing to the case of full agent's information, we obtain as a consequence that in a viable market every properly discounted wealth processes has a version that is a semimartingale.
[ { "type": "R", "before": "market viability", "after": "markets", "start_char_pos": 24, "end_char_pos": 40 }, { "type": "A", "before": null, "after": "(which in turn is equivalent to the weak market viability condition of absence of arbitrage of the first kind)", "start_char_pos": 435, "end_char_pos": 435 } ]
[ 0, 118, 335, 583 ]
0904.4074
1
In this paper, we model dependence between operational risks by allowing risk profiles to evolve stochastically in time and to be dependent. This allows for a flexible correlation structure where the dependence between frequencies of different risk categories and between severities of different risk categories as well as within risk categories can be modelled . The model is estimated using the Bayesian inference methodology, allowing for combination of internal data, external data and expert opinion in the estimation procedure. We use a specialized Markov chain Monte Carlo simulation methodology known as Slice sampling to obtain samples from the resulting posterior distribution and estimate the model parameters.
In this paper, we model dependence between operational risks by allowing risk profiles to evolve stochastically in time and to be dependent. This allows for a flexible correlation structure where the dependence between frequencies of different risk categories and between severities of different risk categories as well as within risk categories can be modeled . The model is estimated using Bayesian inference methodology, allowing for combination of internal data, external data and expert opinion in the estimation procedure. We use a specialized Markov chain Monte Carlo simulation methodology known as Slice sampling to obtain samples from the resulting posterior distribution and estimate the model parameters.
[ { "type": "R", "before": "modelled", "after": "modeled", "start_char_pos": 353, "end_char_pos": 361 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 393, "end_char_pos": 396 } ]
[ 0, 140, 363, 533 ]
0904.4099
1
In the present work we address the problem of evaluating the historical performance of a trading strategy or a certain portfolio of assets. Common indicators such as the Sharpe ratio and the risk adjusted return have significant drawbacks. In particular, they are global indices, that is they do not preserve any %DIFDELCMD < {\em %%% local information about the performance dynamics either in time or for a particular investment horizon. This information could be fundamental for practitioners as the past performance can be affected by the non-stationarity of financial market. In order to highlight this feature, we introduce the local risk decomposition (LRD) formalism, where dynamical information about a strategy's performance is retained. This framework, motivated by the multi-scaling techniques used in complex system theory, is particularly suitable for high-frequency trading systems and can be applied into problems of portfolio optimization.
In the present work we address the problem of evaluating the historical performance of a trading strategy or a certain portfolio of assets. Common indicators such as the Sharpe ratio and the risk adjusted return have significant drawbacks. In particular, they are global indices, that is they do not preserve any %DIFDELCMD < {\em %%% 'local' information about the performance dynamics either in time or for a particular investment horizon. This information could be fundamental for practitioners as the past performance can be affected by the non-stationarity of financial market. In order to highlight this feature, we introduce the ' local risk decomposition ' (LRD) formalism, where dynamical information about a strategy's performance is retained. This framework, motivated by the multi-scaling techniques used in complex system theory, is particularly suitable for high-frequency trading systems and can be applied into problems of strategy optimization.
[ { "type": "R", "before": "local", "after": "'local'", "start_char_pos": 335, "end_char_pos": 340 }, { "type": "A", "before": null, "after": "'", "start_char_pos": 633, "end_char_pos": 633 }, { "type": "A", "before": null, "after": "'", "start_char_pos": 659, "end_char_pos": 659 }, { "type": "R", "before": "portfolio", "after": "strategy", "start_char_pos": 934, "end_char_pos": 943 } ]
[ 0, 139, 239, 438, 579, 748 ]
0904.4155
1
This paper discovers fundamental principles of the backoff process that governs the performance of IEEE 802.11. We first {\it establish} that the so-called mean field technique, which spins off a fixed point equation, is mathematically valid for use in performance analysis of 802.11. On the basis of this, succinct equations describing the backoff distribution as a function of the collision probability \gamma are derived , which also shed light on a controversy in the field . In addition, the observation that the {\it entropy} of the backoff process in 802.11 increases with the number of nodes leads us to see through a Poissonian character inherent in 802.11. However, it is also found that the {\it collision} effect between nodes prevails over the {\it Poissonian aggregation} effect in spite of its tendency to increase with the number of nodes. Based on these findings, we formulate the principle about the inter-transmission probability that lays a foundation for the short-term fairness analysis. Another principle discovered upon regular variation theory is that the per-packet backoff has a truncated {\it Pareto-type} tail distribution with an exponent of (\log \gamma)/\log m (m is the multiplicative factor). This reveals that the backoff process is heavy-tailed in the strict sense for m^2 \gamma > 1, essentially due to collision. Moreover, we {\it show that the inter-transmission probability undergoes a dramatic change at \gamma_0=1/m^2 and falls into two qualitatively distinct categories: either approximately Gaussian or {\it L\'evy} \alpha-stable distribution with \alpha \in (1,2) entailing infinite variances, {\it leaning} tendency, and {\it directional} unfairness.
This paper discovers fundamental principles of the backoff process that governs the performance of IEEE 802.11. We first {\it establish} that the so-called mean field technique, which spins off a fixed point equation, is mathematically valid for use in performance analysis of 802.11. On the basis of this, succinct equations describing the backoff distribution as a function of the collision probability \gamma are derived . In addition, the observation that the {\it entropy} of the backoff process in 802.11 increases with the number of nodes leads us to see through a Poissonian character inherent in 802.11. However, it is also found that the {\it collision} effect between nodes prevails over the {\it Poissonian aggregation} effect in spite of its tendency to increase with the number of nodes. Based on these findings, we formulate the principle about the inter-transmission probability that lays a foundation for the short-term fairness analysis. Another principle discovered upon regular variation theory is that the per-packet backoff has a truncated {\it Pareto-type} tail distribution with an exponent of (\log \gamma)/\log m (m is the multiplicative factor). This reveals that the backoff process is heavy-tailed in the strict sense for m^2 \gamma > 1, essentially due to collision. Moreover, we identify the{\it long-range dependence in 802.11 and show that the inter-transmission probability undergoes a dramatic change at \gamma_0=1/m^2 and falls into two qualitatively distinct categories: either approximately Gaussian or {\it L\'evy} \alpha-stable distribution with \alpha \in (1,2) entailing infinite variances, {\it leaning} tendency, and {\it directional} unfairness.
[ { "type": "D", "before": ", which also shed light on a controversy in the field", "after": null, "start_char_pos": 424, "end_char_pos": 477 }, { "type": "A", "before": null, "after": "identify the", "start_char_pos": 1364, "end_char_pos": 1364 }, { "type": "A", "before": null, "after": "long-range dependence", "start_char_pos": 1369, "end_char_pos": 1369 }, { "type": "A", "before": null, "after": "in 802.11 and", "start_char_pos": 1370, "end_char_pos": 1370 } ]
[ 0, 111, 284, 479, 666, 855, 1009, 1226, 1350 ]
0904.4155
3
This paper discovers fundamental principles of the backoff process that governs the performance of IEEE 802.11. We first make a simplistic Palm interpretation of the Bianchi's formula and on the basis of which, succinct equations describing the backoff distribution as a function of the collision probability \gamma are derived, which also correct a possible misunderstanding in the field. The observation that the entropy of the backoff process in 802.11 increases with the number of nodes leads us to see through a Poissonian character inherent in 802.11. However, it is also found that the collision effect between nodes prevails over the Poissonian aggregation effect in spite of its tendency to increase with the number of nodes. Based on these findings, we formulate the principle about the inter-transmission probability that lays a foundation for the short-term fairness analysis. Another principle discovered upon regular variation theory is that the backoff times have a truncated Pareto-type tail distribution with an exponent of (\log \gamma)/\log m (m is the multiplicative factor ). This reveals that the backoff process is heavy-tailed in the strict sense for m^2 \gamma>1 , essentially due to collision. Moreover, we identify long-range dependence in 802.11 through both of mathematical and empirical wavelet-based analyses and answer a riddle: the absence of long range dependence in aggregate total load. We also show that the inter-transmission probability undergoes a dramatic change at \gamma_0=1/m^2 and falls into two qualitatively distinct categories: either approximately Gaussian or L\'evy \alpha-stable distribution with \alpha \in (1,2) .
This paper discovers fundamental principles of the backoff process that governs the performance of IEEE 802.11. A simplistic principle founded upon regular variation theory is that the backoff time has a truncated Pareto-type tail distribution with an exponent of (\log \gamma)/\log m (m is the multiplicative factor and \gamma is the collision probability ). This reveals that the per-node backoff process is heavy-tailed in the strict sense for \gamma>1 /m^2, and paves the way for the following unifying result. The state-of-the-art theory on the superposition of the heavy-tailed processes is applied to establish a dichotomy exhibited by the aggregate backoff process, putting emphasis on the importance of time-scale on which we view the backoff processes. While the aggregation on normal time-scales leads to a Poisson process, it is approximated by a new limiting process possessing long-range dependence (LRD) on coarse time-scales. This dichotomy turns out to be instrumental in formulating short-term fairness, extending existing formulas to arbitrary population, and to elucidate the absence of LRD in practical situations. A refined wavelet analysis is conducted to strengthen this argument .
[ { "type": "R", "before": "We first make a simplistic Palm interpretation of the Bianchi's formula and on the basis of which, succinct equations describing the backoff distribution as a function of the collision probability \\gamma are derived, which also correct a possible misunderstanding in the field. The observation that the entropy of the backoff process in 802.11 increases with the number of nodes leads us to see through a Poissonian character inherent in 802.11. However, it is also found that the collision effect between nodes prevails over the Poissonian aggregation effect in spite of its tendency to increase with the number of nodes. Based on these findings, we formulate the principle about the inter-transmission probability that lays a foundation for the short-term fairness analysis. Another principle discovered", "after": "A simplistic principle founded", "start_char_pos": 112, "end_char_pos": 917 }, { "type": "R", "before": "times have", "after": "time has", "start_char_pos": 968, "end_char_pos": 978 }, { "type": "A", "before": null, "after": "and \\gamma is the collision probability", "start_char_pos": 1094, "end_char_pos": 1094 }, { "type": "A", "before": null, "after": "per-node", "start_char_pos": 1120, "end_char_pos": 1120 }, { "type": "D", "before": "m^2", "after": null, "start_char_pos": 1177, "end_char_pos": 1180 }, { "type": "R", "before": ", essentially due to collision. Moreover, we identify", "after": "/m^2, and paves the way for the following unifying result. The state-of-the-art theory on the superposition of the heavy-tailed processes is applied to establish a dichotomy exhibited by the aggregate backoff process, putting emphasis on the importance of time-scale on which we view the backoff processes. 
While the aggregation on normal time-scales leads to a Poisson process, it is approximated by a new limiting process possessing", "start_char_pos": 1190, "end_char_pos": 1243 }, { "type": "R", "before": "in 802.11 through both of mathematical and empirical wavelet-based analyses and answer a riddle:", "after": "(LRD) on coarse time-scales. This dichotomy turns out to be instrumental in formulating short-term fairness, extending existing formulas to arbitrary population, and to elucidate", "start_char_pos": 1266, "end_char_pos": 1362 }, { "type": "R", "before": "long range dependence in aggregate total load. We also show that the inter-transmission probability undergoes a dramatic change at \\gamma_0=1/m^2 and falls into two qualitatively distinct categories: either approximately Gaussian or L\\'evy \\alpha-stable distribution with \\alpha \\in (1,2)", "after": "LRD in practical situations. A refined wavelet analysis is conducted to strengthen this argument", "start_char_pos": 1378, "end_char_pos": 1666 } ]
[ 0, 111, 389, 557, 734, 888, 1097, 1221, 1424 ]
0904.4430
1
We present a simple model of firm rating evolution and resulting bankruptcies, taking into account two sources of defaults: individual dynamics of economic development and ordering interactions between firms. We show that such a defined model leads to phase transition, which results in collective defaults. Our results mean that, in the case when the individual firm dynamics favors dumping of rating changes, there is an optimal strengthof firms' interactions from the risk point of view . For small interaction strength parameters there are many independent bankruptcies of individual companies. For large parameters there are giant collective defaults of firm clusters .
We present a simple model of firm rating evolution . We consider two sources of defaults: individual dynamics of economic development and Potts-like interactions between firms. We show that such a defined model leads to phase transition, which results in collective defaults. The existence of the collective phase depends on the mean interaction strength . For small interaction strength parameters , there are many independent bankruptcies of individual companies. For large parameters , there are giant collective defaults of firm clusters . In the case when the individual firm dynamics favors dumping of rating changes, there is an optimal strength of the firm's interactions from the systemic risk point of view .
[ { "type": "R", "before": "and resulting bankruptcies, taking into account", "after": ". We consider", "start_char_pos": 51, "end_char_pos": 98 }, { "type": "R", "before": "ordering", "after": "Potts-like", "start_char_pos": 172, "end_char_pos": 180 }, { "type": "R", "before": "Our results mean that, in the case when the individual firm dynamics favors dumping of rating changes, there is an optimal strengthof firms' interactions from the risk point of view", "after": "The existence of the collective phase depends on the mean interaction strength", "start_char_pos": 308, "end_char_pos": 489 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 534, "end_char_pos": 534 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 621, "end_char_pos": 621 }, { "type": "A", "before": null, "after": ". In the case when the individual firm dynamics favors dumping of rating changes, there is an optimal strength of the firm's interactions from the systemic risk point of view", "start_char_pos": 675, "end_char_pos": 675 } ]
[ 0, 208, 307, 491, 599 ]
0905.0129
1
We study the dynamics of correlation and variance in systems under the load of environmental factors. A universal effect in ensembles of similar systems under load of similar factors is described: in crisis, typically, even before obvious symptoms of crisis appear, correlation increases, and, at the same time, variance (and volatility) increases too. After the crisis achieves its bottom, it can develop into two directions: recovering (both correlations and variance decrease) or fatal catastrophe (correlations decrease, but variance not). This effect is supported by many experiments and observation of groups of humans, mice, trees, grassy plants, and on financial time series. A general approach to explanation of the effect through dynamics of adaptation is developed. URLanization of interaction between factors (Liebig's versus synergistic systems) lead to different adaptation dynamics. This gives an explanation to qualitatively different dynamics of correlation under different types of load .
We study the dynamics of correlation and variance in systems under the load of environmental factors. A universal effect in ensembles of similar systems under the load of similar factors is described: in crisis, typically, even before obvious symptoms of crisis appear, correlation increases, and, at the same time, variance (and volatility) increases too. This effect is supported by many experiments and observations of groups of humans, mice, trees, grassy plants, and on financial time series. A general approach to the explanation of the effect through dynamics of individual adaptation of similar non-interactive individuals to a similar system of external factors is developed. Qualitatively, this approach follows Selye's idea about adaptation energy .
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 159, "end_char_pos": 159 }, { "type": "D", "before": "After the crisis achieves its bottom, it can develop into two directions: recovering (both correlations and variance decrease) or fatal catastrophe (correlations decrease, but variance not).", "after": null, "start_char_pos": 354, "end_char_pos": 544 }, { "type": "R", "before": "observation", "after": "observations", "start_char_pos": 594, "end_char_pos": 605 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 707, "end_char_pos": 707 }, { "type": "R", "before": "adaptation", "after": "individual adaptation of similar non-interactive individuals to a similar system of external factors", "start_char_pos": 754, "end_char_pos": 764 }, { "type": "R", "before": "URLanization of interaction between factors (Liebig's versus synergistic systems) lead to different adaptation dynamics. This gives an explanation to qualitatively different dynamics of correlation under different types of load", "after": "Qualitatively, this approach follows Selye's idea about adaptation energy", "start_char_pos": 779, "end_char_pos": 1006 } ]
[ 0, 101, 353, 544, 684, 778, 899 ]
0905.1882
1
We consider the problem of option pricing under stochastic volatility models, focusing on the two processes known as exponential Ornstein-Uhlenbeck and Stein-Stein. We show they admit the same limit dynamics in the regime of low fluctuations of the volatility process, under which we derive the expressions of the characteristic function and the first four cumulants for the risk neutral probability measure. This allows us to obtain a semi-closed form for European option prices , based on Lewis ' approach. We deeply analyze the case of Plain Vanilla calls, being liquid instruments for which reliable implied volatility surfaces are available. We implement a conceptually simple two steps calibration procedure which considerably reduces the computational burden and we test it against a data set of options traded on the Milan Stock Exchange. Our results show a good agreement with the market data for all the considered models. In particular, the fitted parameters suggest the risk neutral dynamics is in a low volatility fluctuation regime, which supports the reliability of the linear approximation.
We consider the problem of option pricing under stochastic volatility models, focusing on the linear approximation of the two processes known as exponential Ornstein-Uhlenbeck and Stein-Stein. Indeed, we show they admit the same limit dynamics in the regime of low fluctuations of the volatility process, under which we derive the exact expression of the characteristic function associated to the risk neutral probability density. Its knowledge allows us to compute option prices exploiting Lewis and Lipton formula. We analyze in detail the case of Plain Vanilla calls, being liquid instruments for which reliable implied volatility surfaces are available. We also compute the analytical expressions of the first four cumulants, being crucial to implement a simple two steps calibration procedure . It has been tested against a data set of options traded on the Milan Stock Exchange. The analysis we present reveals a good agreement with the market implied surfaces and corroborates the accuracy of the linear approximation.
[ { "type": "A", "before": null, "after": "linear approximation of the", "start_char_pos": 94, "end_char_pos": 94 }, { "type": "R", "before": "We", "after": "Indeed, we", "start_char_pos": 166, "end_char_pos": 168 }, { "type": "R", "before": "expressions", "after": "exact expression", "start_char_pos": 296, "end_char_pos": 307 }, { "type": "R", "before": "and the first four cumulants for the", "after": "associated to the", "start_char_pos": 339, "end_char_pos": 375 }, { "type": "R", "before": "measure. This", "after": "density. Its knowledge", "start_char_pos": 401, "end_char_pos": 414 }, { "type": "R", "before": "obtain a semi-closed form for European option prices , based on Lewis ' approach. We deeply analyze", "after": "compute option prices exploiting Lewis and Lipton formula. We analyze in detail", "start_char_pos": 428, "end_char_pos": 527 }, { "type": "R", "before": "implement a conceptually", "after": "also compute the analytical expressions of the first four cumulants, being crucial to implement a", "start_char_pos": 651, "end_char_pos": 675 }, { "type": "R", "before": "which considerably reduces the computational burden and we test it", "after": ". It has been tested", "start_char_pos": 715, "end_char_pos": 781 }, { "type": "R", "before": "Our results show", "after": "The analysis we present reveals", "start_char_pos": 848, "end_char_pos": 864 }, { "type": "R", "before": "data for all the considered models. In particular, the fitted parameters suggest the risk neutral dynamics is in a low volatility fluctuation regime, which supports the reliability", "after": "implied surfaces and corroborates the accuracy", "start_char_pos": 898, "end_char_pos": 1078 } ]
[ 0, 165, 409, 509, 647, 847, 933 ]
0905.1882
2
We consider the problem of option pricing under stochastic volatility models, focusing on the linear approximation of the two processes known as exponential Ornstein-Uhlenbeck and Stein-Stein. Indeed, we show they admit the same limit dynamics in the regime of low fluctuations of the volatility process, under which we derive the exact expression of the characteristic function associated to the risk neutral probability density. Its knowledge allows us to compute option prices exploiting Lewis and Lipton formula . We analyze in detail the case of Plain Vanilla calls, being liquid instruments for which reliable implied volatility surfaces are available. We also compute the analytical expressions of the first four cumulants, being crucial to implement a simple two steps calibration procedure. It has been tested against a data set of options traded on the Milan Stock Exchange. The analysis we present reveals a good agreement with the market implied surfaces and corroborates the accuracy of the linear approximation.
We consider the problem of option pricing under stochastic volatility models, focusing on the linear approximation of the two processes known as exponential Ornstein-Uhlenbeck and Stein-Stein. Indeed, we show they admit the same limit dynamics in the regime of low fluctuations of the volatility process, under which we derive the exact expression of the characteristic function associated to the risk neutral probability density. This expression allows us to compute option prices exploiting a formula derived by Lewis and Lipton . We analyze in detail the case of Plain Vanilla calls, being liquid instruments for which reliable implied volatility surfaces are available. We also compute the analytical expressions of the first four cumulants, that are crucial to implement a simple two steps calibration procedure. It has been tested against a data set of options traded on the Milan Stock Exchange. The data analysis that we present reveals a good fit with the market implied surfaces and corroborates the accuracy of the linear approximation.
[ { "type": "R", "before": "Its knowledge", "after": "This expression", "start_char_pos": 431, "end_char_pos": 444 }, { "type": "A", "before": null, "after": "a formula derived by", "start_char_pos": 491, "end_char_pos": 491 }, { "type": "D", "before": "formula", "after": null, "start_char_pos": 509, "end_char_pos": 516 }, { "type": "R", "before": "being", "after": "that are", "start_char_pos": 732, "end_char_pos": 737 }, { "type": "R", "before": "analysis", "after": "data analysis that", "start_char_pos": 890, "end_char_pos": 898 }, { "type": "R", "before": "agreement", "after": "fit", "start_char_pos": 925, "end_char_pos": 934 } ]
[ 0, 192, 430, 518, 659, 800, 885 ]
0905.2669
1
The questions in the experimental and theoretical research in the field of interaction of weak influences of various fields with biological objects in conditions when an intense of the influence is smalland weak perturbation are analyzed. The type of resonant interactions connected with these processes is considered. The new method of decision of the k_BT problem in magnetobiology is offered. On the basis of results of the previous work arXiv:0904.1198%DIFDELCMD < ] %%% the analytical expression of energy of a molecule in considered area has been obtained. Numerical estimations of the energy of molecules in capillary and aorta volume are resulted: for the values of relaxation rate \Lambda=0.5 \lambda, 0.05 \lambda, and 0.005 \lambda we obtain the energies \varepsilon=2, 20 and 200 k_BT, respectively, for the molecules of capillaries and \varepsilon=2\cdot 10^{-9} , 20\cdot 10^{-9, and 200\cdot 10^{-9} k_BT, respectively, for the molecules of the aorta. The } capillaries are very sensitive to the resonance effectand the average energy of the molecule localized in the capillary is increased by several orders of magnitude as compared to its thermal energy . This value of the energy is sufficient for the deterioration of the chemical bonds . Even if the magnetic field value is not so near to the resonance value of the magnetic field a significant effect can be reached with an increase of the time of exposition to the magnetic field .
The effect of ultralow-frequency or static magnetic and electric fields on biological processes is of huge interest for researchers due to the resonant change of the intensity of biochemical reactions although the energy in such fields is small. A simplified model to study the effect of the weak magnetic and electrical fields on fluctuation of the random ionic currents in blood and to solve the k_BT problem in magnetobiology is %DIFDELCMD < ] %%% suggested. The analytic expression for the kinetic energy of the molecules dissolved in certain liquid media is obtained. The values of the magnetic field leading to resonant effects in capillaries are estimated. The numerical estimates showed that the resonant values of the energy of molecular in the capillaries and aorta are different: under identical conditions a molecule of the aorta gets 10^{-9} , and 200\cdot 10^{-9} k_BT, respectively, for the molecules of the aorta. The } times less energy than the molecules in blood capillaries. So the capillaries are very sensitive to the resonant effect, with an approach to the resonant value of the magnetic field strength, the average energy of the molecule localized in the capillary is increased by several orders of magnitude as compared to its thermal energy , this value of the energy is sufficient for the deterioration of the chemical bonds .
[ { "type": "R", "before": "questions in the experimental and theoretical research in the field of interaction of weak influences of various fields with biological objects in conditions when an intense of", "after": "effect of ultralow-frequency or static magnetic and electric fields on biological processes is of huge interest for researchers due to the resonant change of the intensity of biochemical reactions although the energy in such fields is small. A simplified model to study the effect of the weak magnetic and electrical fields on fluctuation of", "start_char_pos": 4, "end_char_pos": 180 }, { "type": "R", "before": "influence is smalland weak perturbation are analyzed. The type of resonant interactions connected with these processes is considered. The new method of decision of the", "after": "random ionic currents in blood and to solve the", "start_char_pos": 185, "end_char_pos": 352 }, { "type": "D", "before": "offered. On the basis of results of the previous work", "after": null, "start_char_pos": 387, "end_char_pos": 440 }, { "type": "D", "before": "arXiv:0904.1198", "after": null, "start_char_pos": 441, "end_char_pos": 456 }, { "type": "R", "before": "the analytical expression of energy of a molecule in considered area has been obtained. Numerical estimations of the energy of molecules in capillary and aorta volume are resulted: for the values of relaxation rate \Lambda=0.5 \lambda, 0.05 \lambda, and 0.005 \lambda we obtain the energies \varepsilon=2, 20 and 200 k_BT, respectively, for the molecules of capillaries and \varepsilon=2\cdot", "after": "suggested. The analytic expression for the kinetic energy of the molecules dissolved in certain liquid media is obtained. The values of the magnetic field leading to resonant effects in capillaries are estimated. The numerical estimates showed that the resonant values of the energy of molecular in the capillaries and aorta are different: under identical conditions a molecule of the aorta gets", "start_char_pos": 475, "end_char_pos": 867 }, { "type": "D", "before": ", 20\cdot 10^{-9", "after": null, "start_char_pos": 876, "end_char_pos": 892 }, { "type": "A", "before": null, "after": "times less energy than the molecules in blood capillaries. So the", "start_char_pos": 973, "end_char_pos": 973 }, { "type": "R", "before": "resonance effectand the", "after": "resonant effect, with an approach to the resonant value of the magnetic field strength, the", "start_char_pos": 1012, "end_char_pos": 1035 }, { "type": "R", "before": ". This", "after": ", this", "start_char_pos": 1172, "end_char_pos": 1178 }, { "type": "D", "before": ". Even if the magnetic field value is not so near to the resonance value of the magnetic field a significant effect can be reached with an increase of the time of exposition to the magnetic field", "after": null, "start_char_pos": 1257, "end_char_pos": 1452 } ]
[ 0, 238, 318, 395, 562, 966, 1173, 1258 ]
0905.3701
2
The stochastic exponential Z_t=\exp\{M_t-M_0-(1/2) <M,M>_t\} of a continuous local martingale M is itself a continuous local martingale. We give a necessary and sufficient condition for the process Z to be a true martingale in the case where M_t=\int_0^t b(Y_u) dW_u and Y is a one-dimensional diffusion driven by a Brownian motion ~ W. Furthermore, we provide a necessary and sufficient condition for Z to be a uniformly integrable martingale in the same setting. These conditions are deterministic and expressed only in terms of the function b and the drift and diffusion coefficients of ~ Y. As an application we provide a deterministic criterion for the absence of bubbles in a one-dimensional setting.
The stochastic exponential Z_t=\exp\{M_t-M_0-(1/2) <M,M>_t\} of a continuous local martingale M is itself a continuous local martingale. We give a necessary and sufficient condition for the process Z to be a true martingale in the case where M_t=\int_0^t b(Y_u) \, dW_u and Y is a one-dimensional diffusion driven by a Brownian motion W. Furthermore, we provide a necessary and sufficient condition for Z to be a uniformly integrable martingale in the same setting. These conditions are deterministic and expressed only in terms of the function b and the drift and diffusion coefficients of Y. As an application we provide a deterministic criterion for the absence of bubbles in a one-dimensional setting.
[ { "type": "A", "before": null, "after": "\\,", "start_char_pos": 262, "end_char_pos": 262 }, { "type": "D", "before": "~", "after": null, "start_char_pos": 333, "end_char_pos": 334 }, { "type": "D", "before": "~", "after": null, "start_char_pos": 591, "end_char_pos": 592 } ]
[ 0, 136, 465 ]
0906.0208
1
We prove existence and uniqueness of stochastic equilibria in a representative class of incomplete continuous-time financial environments where the market participants are exponential utility maximizers with heterogeneous risk-aversion coefficients and general random endowments. The incompleteness featured in our setting - the source of which can be thought of as a credit event or a catastrophe - is genuine in the sense that not only the prices, but also the family of replicable claims itself is determined as a part of the equilibrium. Consequently, the usual approach which employs the a-posteriori Pareto optimality of equilibrium allocations and the related representative-agent techniques in the complete-market setting cannot be used. Instead, we follow a novel route based on new stability results for a class of semilinear partial differential equations related to the Hamilton-Jacobi-Bellman equation for the agents' utility-maximization problems. This approach leads to a reformulation of the problem where the Banach fixed point theorem can be used not only to show existence and uniqueness, but also to provide a simple and efficient numerical procedure for its computation.
We prove existence and uniqueness of stochastic equilibria in a class of incomplete continuous-time financial environments where the market participants are exponential utility maximizers with heterogeneous risk-aversion coefficients and general Markovian random endowments. The incompleteness featured in our setting - the source of which can be thought of as a credit event or a catastrophe - is genuine in the sense that not only the prices, but also the family of replicable claims itself is determined as a part of the equilibrium. Consequently, equilibrium allocations are not necessarily Pareto optimal and the related representative-agent techniques cannot be used. Instead, we follow a novel route based on new stability results for a class of semilinear partial differential equations related to the Hamilton-Jacobi-Bellman equation for the agents' utility-maximization problems. This approach leads to a reformulation of the problem where the Banach fixed point theorem can be used not only to show existence and uniqueness, but also to provide a simple and efficient numerical procedure for its computation.
[ { "type": "D", "before": "representative", "after": null, "start_char_pos": 64, "end_char_pos": 78 }, { "type": "A", "before": null, "after": "Markovian", "start_char_pos": 261, "end_char_pos": 261 }, { "type": "R", "before": "the usual approach which employs the a-posteriori Pareto optimality of equilibrium allocations", "after": "equilibrium allocations are not necessarily Pareto optimal", "start_char_pos": 557, "end_char_pos": 651 }, { "type": "D", "before": "in the complete-market setting", "after": null, "start_char_pos": 700, "end_char_pos": 730 } ]
[ 0, 280, 542, 746, 962 ]
0906.0557
1
We present a set of five axioms for fairness measures in resource allocation. A family of fairness measures satisfying the axioms is constructed. Well-known notions such as alpha-fairness, Jain's index and entropy are shown to be special cases. Properties of fairness measures satisfying the axioms are proven, including Schur-concavity. Among the engineering implications is a new understanding of alpha-fair utility functions and an interpretation of "larger alpha is more fair" .
We present a set of five axioms for fairness measures in resource allocation. A family of fairness measures satisfying the axioms is constructed. Well-known notions such as alpha-fairness, Jain's index , and entropy are shown to be special cases. Properties of fairness measures satisfying the axioms are proven, including Schur-concavity. Among the engineering implications is a generalized Jain's index that tunes the resolution of the fairness measure, a new understanding of alpha-fair utility functions , and an interpretation of "larger alpha is more fair" . We also construct an alternative set of four axioms to capture efficiency objectives and feasibility constraints .
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 202, "end_char_pos": 202 }, { "type": "A", "before": null, "after": "generalized Jain's index that tunes the resolution of the fairness measure, a", "start_char_pos": 379, "end_char_pos": 379 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 430, "end_char_pos": 430 }, { "type": "A", "before": null, "after": ". We also construct an alternative set of four axioms to capture efficiency objectives and feasibility constraints", "start_char_pos": 484, "end_char_pos": 484 } ]
[ 0, 77, 145, 245, 338 ]
0906.2100
1
Consider two insurance companies (or two branches of the same company) that have the same claims and they divide premia in some specified proportions. We model the occurrence of claims according to a Poisson process. The ruin is achieved if the corresponding two-dimensional risk process first leave the positive quadrant. We consider different kinds of linear barriers. We will consider two scenarios of controlled process . In first one when two-dimensional risk process hits the barrier the minimal amount of dividends is payed out to keep the risk process within the regionbounded by the barrier . In the second scenario whenever process hits horizontal line, the risk process is reduced by paying dividend to some fixed point in the positive quadrant and waits there for the first claim to arrive. In both models we calculate discounted cumulative dividend payments until the ruin time .
Consider two insurance companies (or two branches of the same company) that have the same claims and they divide premia in some specified proportions. We model the occurrence of claims according to a Poisson process. The ruin is achieved if the corresponding two-dimensional risk process first leave the positive quadrant. We will consider two scenarios of controlled process : refraction and the impulse control . In first one the dividends are payed out when two-dimensional risk process exit fixed region . In the second scenario whenever process hits horizontal line, the risk process is reduced by paying dividend to some fixed point in the positive quadrant and waits there for the first claim to arrive. In both models we calculate discounted cumulative dividend payments until the ruin time . This paper is an attempt at understanding the effect of dependencies of two portfolios on the the join optimal strategy of paying dividends. For the proportional reinsurance we can observe for example the interesting phenomenon that is dependence of choice of the optimal barrier on the initial reserves. This is a contrast to the one-dimensional Cram\'{e .
[ { "type": "D", "before": "consider different kinds of linear barriers. We", "after": null, "start_char_pos": 326, "end_char_pos": 373 }, { "type": "A", "before": null, "after": ": refraction and the impulse control", "start_char_pos": 424, "end_char_pos": 424 }, { "type": "A", "before": null, "after": "the dividends are payed out", "start_char_pos": 440, "end_char_pos": 440 }, { "type": "R", "before": "two-dimensional risk process hits the barrier the minimal amount of dividends is payed out to keep the risk process within the regionbounded by the barrier", "after": "two-dimensional risk process exit fixed region", "start_char_pos": 446, "end_char_pos": 601 }, { "type": "A", "before": null, "after": ". This paper is an attempt at understanding the effect of dependencies of two portfolios on the the join optimal strategy of paying dividends. For the proportional reinsurance we can observe for example the interesting phenomenon that is dependence of choice of the optimal barrier on the initial reserves. This is a contrast to the one-dimensional Cram\\'{e", "start_char_pos": 893, "end_char_pos": 893 } ]
[ 0, 150, 216, 322, 370, 426, 603, 804 ]
0906.2100
2
Consider two insurance companies (or two branches of the same company) that have the same claims and they divide premia in some specified proportions . We model the occurrence of claims according to a Poisson process. The ruin is achieved if the corresponding two-dimensional risk process first leave the positive quadrant. We will consider two scenarios of controlled process: refraction and the impulse control. In first one the dividends are payed out when two-dimensional risk process exit fixed region. In the second scenario whenever process hits horizontal line, the risk process is reduced by paying dividend to some fixed point in the positive quadrant and waits there for the first claim to arrive. In both models we calculate discounted cumulative dividend payments until the ruin time . This paper is an attempt at understanding the effect of dependencies of two portfolios on the the join optimal strategy of paying dividends. For the proportional reinsurance we can observe for example the interesting phenomenon that is dependence of choice of the optimal barrier on the initial reserves. This is a contrast to the one-dimensional Cram\'{e}r-Lundberg model where the optimal choice of barrier among optimal barrier strategies is uniform for all initial reserves.
Consider two insurance companies (or two branches of the same company) that receive premiums at different rates and then split the amount they pay in fixed proportions for each claim (for simplicity we assume that they are equal) . We model the occurrence of claims according to a Poisson process. The ruin is achieved when the corresponding two-dimensional risk process first leaves the positive quadrant. We will consider two scenarios of the controlled process: refraction and impulse control. In the first case the dividends are payed out when the two-dimensional risk process exits the fixed region. In the second scenario , whenever the process hits the horizontal line, it is reduced by paying dividends to some fixed point in the positive quadrant where it waits for the next claim to arrive. In both models we calculate the discounted cumulative dividend payments until the ruin . This paper is the first attempt to understand the effect of dependencies of two portfolios on the joint optimal strategy of paying dividends. For example in case of proportional reinsurance one can observe the interesting phenomenon that choice of the optimal barrier depends on the initial reserves. This is in contrast with the one-dimensional Cram\'{e}r-Lundberg model where the optimal choice of the barrier is uniform for all initial reserves.
[ { "type": "R", "before": "have the same claims and they divide premia in some specified proportions", "after": "receive premiums at different rates and then split the amount they pay in fixed proportions for each claim (for simplicity we assume that they are equal)", "start_char_pos": 76, "end_char_pos": 149 }, { "type": "R", "before": "if", "after": "when", "start_char_pos": 239, "end_char_pos": 241 }, { "type": "R", "before": "leave", "after": "leaves", "start_char_pos": 295, "end_char_pos": 300 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 358, "end_char_pos": 358 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 394, "end_char_pos": 397 }, { "type": "R", "before": "first one", "after": "the first case", "start_char_pos": 418, "end_char_pos": 427 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 461, "end_char_pos": 461 }, { "type": "R", "before": "exit", "after": "exits the", "start_char_pos": 491, "end_char_pos": 495 }, { "type": "R", "before": "whenever process hits", "after": ", whenever the process hits the", "start_char_pos": 533, "end_char_pos": 554 }, { "type": "R", "before": "the risk process", "after": "it", "start_char_pos": 572, "end_char_pos": 588 }, { "type": "R", "before": "dividend", "after": "dividends", "start_char_pos": 610, "end_char_pos": 618 }, { "type": "R", "before": "and waits there for the first", "after": "where it waits for the next", "start_char_pos": 664, "end_char_pos": 693 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 739, "end_char_pos": 739 }, { "type": "D", "before": "time", "after": null, "start_char_pos": 795, "end_char_pos": 799 }, { "type": "R", "before": "an attempt at understanding", "after": "the first attempt to understand", "start_char_pos": 816, "end_char_pos": 843 }, { "type": "R", "before": "the join", "after": "joint", "start_char_pos": 896, "end_char_pos": 904 }, { "type": "R", "before": "the proportional reinsurance we can observe for example", "after": "example in case of proportional reinsurance one can observe", "start_char_pos": 947, "end_char_pos": 1002 }, { "type": "D", "before": "is dependence of", "after": null, "start_char_pos": 1035, "end_char_pos": 1051 }, { "type": "A", "before": null, "after": "depends", "start_char_pos": 1082, "end_char_pos": 1082 }, { "type": "R", "before": "a contrast to", "after": "in contrast with", "start_char_pos": 1116, "end_char_pos": 1129 }, { "type": "R", "before": "barrier among optimal barrier strategies", "after": "the barrier", "start_char_pos": 1204, "end_char_pos": 1244 } ]
[ 0, 151, 217, 323, 414, 509, 710, 801, 942, 1107 ]
0906.2311
1
In this paper we present and study the connectivity problem for wireless networks under the Signal to Interference plus Noise Ratio (SINR) model. Given a set of radio transmitters distributed in some area, we seek to build a directed strongly connected communication graph, and compute an edge coloring of this graph such that the transmitter-receiver pairs in each color class can communicate simultaneously. Depending on the interference model, more or less colors, corresponding to the number of frequencies or time slots, are necessary. We consider the SINR model that compares the received power of a signal at a receiver to the sum of the strength of other signals plus ambient noise . The strength of a signal is assumed to fade polynomially with the distance from the sender, depending on the so-called path-loss exponent \alpha. We show that, when all transmitters use the same power, the number of colors needed is constant in one-dimensional grids if \alpha>1 as well as in two-dimensional grids if \alpha>2. For smaller path-loss exponents and two-dimensional grids we prove upper and lower bounds in the order of O(\log n) and \Omega(\log n/\log\log n) for \alpha=2 and \Theta(n^{ 1-\alpha/ 2 }) for \alpha<2 respectively. If nodes are distributed uniformly at random on the interval [0,1], a regular coloring of O(\log n) colors guarantees connectivity, while \Omega(\log \log n) colors are required for any coloring.
In this paper we study the connectivity problem for wireless networks under the Signal to Interference plus Noise Ratio (SINR) model. Given a set of radio transmitters distributed in some area, we seek to build a directed strongly connected communication graph, and compute an edge coloring of this graph such that the transmitter-receiver pairs in each color class can communicate simultaneously. Depending on the interference model, more or less colors, corresponding to the number of frequencies or time slots, are necessary. We consider the SINR model that compares the received power of a signal at a receiver to the sum of the strength of other signals plus ambient noise . The strength of a signal is assumed to fade polynomially with the distance from the sender, depending on the so-called path-loss exponent \alpha. We show that, when all transmitters use the same power, the number of colors needed is constant in one-dimensional grids if \alpha>1 as well as in two-dimensional grids if \alpha>2. For smaller path-loss exponents and two-dimensional grids we prove upper and lower bounds in the order of O(\log n) and \Omega(\log n/\log\log n) for \alpha=2 and \Theta(n^{ 2 /\alpha-1 }) for \alpha<2 respectively. If nodes are distributed uniformly at random on the interval [0,1], a regular coloring of O(\log n) colors guarantees connectivity, while \Omega(\log \log n) colors are required for any coloring.
[ { "type": "D", "before": "present and", "after": null, "start_char_pos": 17, "end_char_pos": 28 }, { "type": "D", "before": "1-\\alpha/", "after": null, "start_char_pos": 1194, "end_char_pos": 1203 }, { "type": "A", "before": null, "after": "/\\alpha-1", "start_char_pos": 1206, "end_char_pos": 1206 } ]
[ 0, 145, 409, 540, 691, 837, 1019, 1236 ]
0906.3178
1
The microtubule cortical array is a structure consisting of highly aligned microtubules , observed in all growing plant cells, which plays a crucial role in the characteristic plant cell growth by uniaxial expansion along the axis perpendicular to the microtubules. To investigate the orientational ordering of microtubulesin this system, we present both a coarse-grained theoretical model and stochastic particle-based simulations , and compare the results from these complementary approaches. Our results indicate that collisions that induce depolymerization are the main driving factor in the alignment of microtubules in the cortical array.
The cortical array is a structure consisting of highly aligned microtubules which plays a crucial role in the characteristic uniaxial expansion of all growing plant cells. Recent experiments have shown polymerization-driven collisions between the membrane-bound cortical microtubules, suggesting a possible mechanism for their alignment. We present both a coarse-grained theoretical model and stochastic particle-based simulations of this mechanism , and compare the results from these complementary approaches. Our results indicate that collisions that induce depolymerization are sufficient to generate the alignment of microtubules in the cortical array.
[ { "type": "D", "before": "microtubule", "after": null, "start_char_pos": 4, "end_char_pos": 15 }, { "type": "D", "before": ", observed in all growing plant cells,", "after": null, "start_char_pos": 88, "end_char_pos": 126 }, { "type": "R", "before": "plant cell growth by uniaxial expansion along the axis perpendicular to the microtubules. To investigate the orientational ordering of microtubulesin this system, we", "after": "uniaxial expansion of all growing plant cells. Recent experiments have shown polymerization-driven collisions between the membrane-bound cortical microtubules, suggesting a possible mechanism for their alignment. We", "start_char_pos": 176, "end_char_pos": 341 }, { "type": "A", "before": null, "after": "of this mechanism", "start_char_pos": 432, "end_char_pos": 432 }, { "type": "R", "before": "the main driving factor in", "after": "sufficient to generate", "start_char_pos": 566, "end_char_pos": 592 } ]
[ 0, 265, 495 ]
0906.3841
1
Stock prices are known to exhibit non-Gaussian dynamics, and there is much interest in understanding the origin of this behavior. Here, we present a simple model that explains the shape and scaling of the distribution of intraday stock price fluctuations (called intraday returns) and verify the model using a large database for several stocks traded on the London Stock Exchange. We provide evidence that the return distribution for these stocks is non-Gaussian and similar in shape, and that the distribution appears stable over intraday time scales. We explain these results by assuming the volatility of returns is constant intraday, but varies over longer periods such that its inverse square follows a gamma distribution. This produces returns that are Student t-distributed for intraday time scales. The predicted results show excellent agreement with the data for all stocks in our study and over all regions of the return distribution.
Stock prices are known to exhibit non-Gaussian dynamics, and there is much interest in understanding the origin of this behavior. Here, we present a model that explains the shape and scaling of the distribution of intraday stock price fluctuations (called intraday returns) and verify the model using a large database for several stocks traded on the London Stock Exchange. We provide evidence that the return distribution for these stocks is non-Gaussian and similar in shape, and that the distribution appears stable over intraday time scales. We explain these results by assuming the volatility of returns is constant intraday, but varies over longer periods such that its inverse square follows a gamma distribution. This produces returns that are Student distributed for intraday time scales. The predicted results show excellent agreement with the data for all stocks in our study and over all regions of the return distribution.
[ { "type": "D", "before": "simple", "after": null, "start_char_pos": 149, "end_char_pos": 155 }, { "type": "R", "before": "t-distributed", "after": "distributed", "start_char_pos": 767, "end_char_pos": 780 } ]
[ 0, 129, 380, 552, 727, 806 ]
0906.4861
1
A transition rate model of cargo transportation by N effective molecular motors is proposed. Under the assumption of steady state, the force-velocity curve of multi-motor system can be derived from the force-velocity curve of single motor. Our work shows, in the case of low load, the velocity of multi-motor system can decrease or increase with increasing motor number, which is dependent on the single motor force-velocity curve. And most commonly, the velocity decreases. This gives a possible explanation to some recent experimental observations.
A transition rate model of cargo transport by N molecular motors is proposed. Under the assumption of steady state, the force-velocity curve of multi-motor system can be derived from the force-velocity curve of single motor. Our work shows, in the case of low load, the velocity of multi-motor system can decrease or increase with increasing motor number, which is dependent on the single motor force-velocity curve. And most commonly, the velocity decreases. This gives a possible explanation to some recent
[ { "type": "R", "before": "transportation by N effective", "after": "transport by N", "start_char_pos": 33, "end_char_pos": 62 }, { "type": "D", "before": "experimental observations.", "after": null, "start_char_pos": 524, "end_char_pos": 550 } ]
[ 0, 92, 239, 431, 474 ]
0907.0941
1
In this paper we consider a class of BSDE with drivers of quadratic growth, on a stochastic basis generated by continuous local martingales. We first derive the Markov property of a forward-backward system (FBSDE) if the generating martingale is a strong Markov process. Then we establish the differentiability of a FBSDE with respect to the initial value of its forward component. This enables us to obtain the main result of this article which from the perspective of a utility optimization interpretation of the underlying control problem on a financial market takes the following form. The control process of the BSDE steers the system into a random liabilitydepending on a market external uncertainty and this way describes the optimal derivative hedge of the liability by investment in a capital market the dynamics of which is described by the forward component. This delta hedge is described in a key formula in terms of a derivative functional of the solution process and the correlation structure of the internal uncertainty captured by the forward process and the external uncertainty responsible for the market incompleteness . The formula largely extends the scope of validity of the results obtained by several authors in the Brownian setting , designed to give a genuinely stochastic representation of the optimal delta hedge in the context of cross hedging insurance derivatives generalizing the derivative hedge in the Black-Scholes model . Of course , Malliavin's calculus needed in the Brownian setting is not available in the general local martingale framework . We replace it by new tools based on stochastic calculus techniques .
In this paper we consider a class of BSDEs with drivers of quadratic growth, on a stochastic basis generated by continuous local martingales. We first derive the Markov property of a forward-backward system (FBSDE) if the generating martingale is a strong Markov process. Then we establish the differentiability of a FBSDE with respect to the initial value of its forward component. This enables us to obtain the main result of this article , namely a representation formula for the control component of its solution. The latter is relevant in the context of securitization of random liabilities arising from exogenous risk, which are optimally hedged by investment in a given financial market with respect to exponential preferences. In a purely stochastic formulation, the control process of the backward component of the FBSDE steers the system into the random liability, and describes its optimal derivative hedge by investment in the capital market the dynamics of which is given by the forward component. The representation formula of the main result describes this delta hedge in terms of the derivative of the BSDE's solution process on the one hand, and the correlation structure of the internal uncertainty captured by the forward process and the external uncertainty responsible for the market incompleteness on the other hand . The formula extends the scope of validity of the results obtained by several authors in the Brownian setting . It is designed to extend a genuinely stochastic representation of the optimal replication in cross hedging insurance derivatives from the classical Black-Scholes model to incomplete markets on general stochastic bases. In this setting , Malliavin's calculus which is required in the Brownian framework is replaced by new tools based on techniques related to a calculus of quadratic covariations of basis martingales .
[ { "type": "R", "before": "BSDE", "after": "BSDEs", "start_char_pos": 37, "end_char_pos": 41 }, { "type": "R", "before": "which from the perspective of a utility optimization interpretation of the underlying control problem on a financial market takes the following form. The", "after": ", namely a representation formula for the", "start_char_pos": 440, "end_char_pos": 593 }, { "type": "A", "before": null, "after": "component of its solution. The latter is relevant in the context of securitization of random liabilities arising from exogenous risk, which are optimally hedged by investment in a given financial market with respect to exponential preferences. In a purely stochastic formulation, the control", "start_char_pos": 602, "end_char_pos": 602 }, { "type": "R", "before": "BSDE", "after": "backward component of the FBSDE", "start_char_pos": 618, "end_char_pos": 622 }, { "type": "R", "before": "a random liabilitydepending on a market external uncertainty and this way describes the", "after": "the random liability, and describes its", "start_char_pos": 646, "end_char_pos": 733 }, { "type": "D", "before": "of the liability", "after": null, "start_char_pos": 759, "end_char_pos": 775 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 793, "end_char_pos": 794 }, { "type": "R", "before": "described", "after": "given", "start_char_pos": 835, "end_char_pos": 844 }, { "type": "R", "before": "This delta hedge is described in a key formula in terms of a derivative functional of the solution process", "after": "The representation formula of the main result describes this delta hedge in terms of the derivative of the BSDE's solution process on the one hand,", "start_char_pos": 871, "end_char_pos": 977 }, { "type": "A", "before": null, "after": "on the other hand", "start_char_pos": 1139, "end_char_pos": 1139 }, { "type": "D", "before": "largely", "after": null, "start_char_pos": 1154, "end_char_pos": 1161 }, { "type": "R", "before": ", designed to give", "after": ". It is designed to extend", "start_char_pos": 1259, "end_char_pos": 1277 }, { "type": "R", "before": "delta hedge in the context of", "after": "replication in", "start_char_pos": 1331, "end_char_pos": 1360 }, { "type": "R", "before": "generalizing the derivative hedge in the", "after": "from the classical", "start_char_pos": 1397, "end_char_pos": 1437 }, { "type": "R", "before": ". Of course", "after": "to incomplete markets on general stochastic bases. In this setting", "start_char_pos": 1458, "end_char_pos": 1469 }, { "type": "R", "before": "needed in the Brownian setting is not available in the general local martingale framework . We replace it", "after": "which is required in the Brownian framework is replaced", "start_char_pos": 1493, "end_char_pos": 1598 }, { "type": "R", "before": "stochastic calculus techniques", "after": "techniques related to a calculus of quadratic covariations of basis martingales", "start_char_pos": 1621, "end_char_pos": 1651 } ]
[ 0, 140, 270, 381, 589, 870, 1141, 1459, 1584 ]
0907.0941
2
In this paper we consider a class of BSDEs with drivers of quadratic growth, on a stochastic basis generated by continuous local martingales. We first derive the Markov property of a forward-backward system (FBSDE) if the generating martingale is a strong Markov process. Then we establish the differentiability of a FBSDE with respect to the initial value of its forward component. This enables us to obtain the main result of this article, namely a representation formula for the control component of its solution. The latter is relevant in the context of securitization of random liabilities arising from exogenous risk, which are optimally hedged by investment in a given financial market with respect to exponential preferences. In a purely stochastic formulation, the control process of the backward component of the FBSDE steers the system into the random liability , and describes its optimal derivative hedge by investment in the capital market the dynamics of which is given by the forward component . The representation formula of the main result describes this delta hedge in terms of the derivative of the BSDE's solution process on the one hand, and the correlation structure of the internal uncertainty captured by the forward process and the external uncertainty responsible for the market incompleteness on the other hand. The formula extends the scope of validity of the results obtained by several authors in the Brownian setting. It is designed to extend a genuinely stochastic representation of the optimal replication in cross hedging insurance derivatives from the classical Black-Scholes model to incomplete markets on general stochastic bases. In this setting, Malliavin's calculus which is required in the Brownian framework is replaced by new tools based on techniques related to a calculus of quadratic covariations of basis martingales .
In this paper we consider a class of BSDEs with drivers of quadratic growth, on a stochastic basis generated by continuous local martingales. We first derive the Markov property of a forward--backward system (FBSDE) if the generating martingale is a strong Markov process. Then we establish the differentiability of a FBSDE with respect to the initial value of its forward component. This enables us to obtain the main result of this article, namely a representation formula for the control component of its solution. The latter is relevant in the context of securitization of random liabilities arising from exogenous risk, which are optimally hedged by investment in a given financial market with respect to exponential preferences. In a purely stochastic formulation, the control process of the backward component of the FBSDE steers the system into the random liability and describes its optimal derivative hedge by investment in the capital market , the dynamics of which is given by the forward component .
[ { "type": "R", "before": "forward-backward", "after": "forward--backward", "start_char_pos": 183, "end_char_pos": 199 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 873, "end_char_pos": 874 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 954, "end_char_pos": 954 }, { "type": "D", "before": ". The representation formula of the main result describes this delta hedge in terms of the derivative of the BSDE's solution process on the one hand, and the correlation structure of the internal uncertainty captured by the forward process and the external uncertainty responsible for the market incompleteness on the other hand. The formula extends the scope of validity of the results obtained by several authors in the Brownian setting. It is designed to extend a genuinely stochastic representation of the optimal replication in cross hedging insurance derivatives from the classical Black-Scholes model to incomplete markets on general stochastic bases. In this setting, Malliavin's calculus which is required in the Brownian framework is replaced by new tools based on techniques related to a calculus of quadratic covariations of basis martingales", "after": null, "start_char_pos": 1011, "end_char_pos": 1865 } ]
[ 0, 141, 271, 382, 516, 733, 1012, 1340, 1450, 1669 ]
0907.2926
1
We present a method for constructing new families of solvable one-dimensional diffusions with linear drift and nonlinear diffusion coefficient functions, whose transition densities are obtainable in analytically closed-form. Our approach is based on the so-called diffusion canonical transformation method that allows us to uncover new multiparameter diffusions that are mapped onto various simpler underlying diffusions. We give a simple rigorous boundary classification and characterization of the newly constructed processes with respect to the martingale property. Specifically, we construct, analyse and classify three new families of nonlinear diffusion models with affine drift that arise from the squared Bessel process (the Bessel family), the CIR process (the confluent hypergeometric family), and the Ornstein-Uhlenbeck diffusion (the OU family) .
We present new extensions to a method for constructing several families of solvable one-dimensional time-homogeneous diffusions whose transition densities are obtainable in analytically closed-form. Our approach is based on a dual application of the so-called diffusion canonical transformation method that combines smooth monotonic mappings and measure changes via Doob-h transforms. This gives rise to new multi-parameter solvable diffusions that are generally divided into two main classes; the first is specified by having affine (linear) drift with various resulting nonlinear diffusion coefficient functions, while the second class allows for several specifications of a (generally nonlinear) diffusion coefficient with resulting nonlinear drift function. The theory is applicable to diffusions with either singular and/or non-singular endpoints. As part of the results in this paper, we also present a complete boundary classification and martingale characterization of the newly developed diffusion families .
[ { "type": "A", "before": null, "after": "new extensions to", "start_char_pos": 11, "end_char_pos": 11 }, { "type": "R", "before": "new", "after": "several", "start_char_pos": 38, "end_char_pos": 41 }, { "type": "R", "before": "diffusions with linear drift and nonlinear diffusion coefficient functions,", "after": "time-homogeneous diffusions", "start_char_pos": 79, "end_char_pos": 154 }, { "type": "A", "before": null, "after": "a dual application of", "start_char_pos": 251, "end_char_pos": 251 }, { "type": "R", "before": "allows us to uncover new multiparameter", "after": "combines smooth monotonic mappings and measure changes via Doob-h transforms. This gives rise to new multi-parameter solvable", "start_char_pos": 313, "end_char_pos": 352 }, { "type": "R", "before": "mapped onto various simpler underlying diffusions. We give a simple rigorous boundary classification and characterization of the newly constructed processes with respect to the martingale property. Specifically, we construct, analyse and classify three new families of nonlinear diffusion models with affine drift that arise from the squared Bessel process (the Bessel family), the CIR process (the confluent hypergeometric family), and the Ornstein-Uhlenbeck diffusion (the OU family)", "after": "generally divided into two main classes; the first is specified by having affine (linear) drift with various resulting nonlinear diffusion coefficient functions, while the second class allows for several specifications of a (generally nonlinear) diffusion coefficient with resulting nonlinear drift function. The theory is applicable to diffusions with either singular and/or non-singular endpoints. As part of the results in this paper, we also present a complete boundary classification and martingale characterization of the newly developed diffusion families", "start_char_pos": 373, "end_char_pos": 858 } ]
[ 0, 225, 423, 570 ]
0907.3810
1
Diffusive processes describe a large set of phenomena occurring on natural and social systems modeled in terms of complex weighted networks. Here we introduce a general formalism that allows to easily write down mean-field equations for any diffusive dynamics on weighted networks , and to recognize the concept of an annealed weighted network , in which such equations become exact. The analysis of the simple random walk process reveals a strong departure of its behavior in quenched real scale-free networks from the mean-field predictions. Our work sheds light on mean-field theory on weighted networks and on its range of validity, warning about the reliability of mean-field results for more complex dynamics.
Diffusion is a key element of a large set of phenomena occurring on natural and social systems modeled in terms of complex weighted networks. Here , we introduce a general formalism that allows to easily write down mean-field equations for any diffusive dynamics on weighted networks . We also propose the concept of annealed weighted networks , in which such equations become exact. We show the validity of our approach addressing the problem of the random walk process , pointing out a strong departure of the behavior observed in quenched real scale-free networks from the mean-field predictions. Additionally, we show how to employ our formalism for more complex dynamics. Our work sheds light on mean-field theory on weighted networks and on its range of validity, and warns about the reliability of mean-field results for complex dynamics.
[ { "type": "R", "before": "Diffusive processes describe a", "after": "Diffusion is a key element of a", "start_char_pos": 0, "end_char_pos": 30 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 146, "end_char_pos": 146 }, { "type": "R", "before": ", and to recognize", "after": ". We also propose", "start_char_pos": 282, "end_char_pos": 300 }, { "type": "R", "before": "an annealed weighted network", "after": "annealed weighted networks", "start_char_pos": 316, "end_char_pos": 344 }, { "type": "R", "before": "The analysis of the simple", "after": "We show the validity of our approach addressing the problem of the", "start_char_pos": 385, "end_char_pos": 411 }, { "type": "R", "before": "reveals", "after": ", pointing out", "start_char_pos": 432, "end_char_pos": 439 }, { "type": "R", "before": "its behavior", "after": "the behavior observed", "start_char_pos": 462, "end_char_pos": 474 }, { "type": "A", "before": null, "after": "Additionally, we show how to employ our formalism for more complex dynamics.", "start_char_pos": 545, "end_char_pos": 545 }, { "type": "R", "before": "warning", "after": "and warns", "start_char_pos": 639, "end_char_pos": 646 }, { "type": "D", "before": "more", "after": null, "start_char_pos": 695, "end_char_pos": 699 } ]
[ 0, 140, 384, 544 ]
0908.0348
1
We use a model of proportionate growth to describe the dynamics of international trade flows. We provide an explanation to the fact that the extensive margin of trade account for a large fraction of the greater exports of large economies, as well as for a number of stylized facts described by the literature on trade networks such as the power-law distribution of connectivity and the fat tails displayed by the distribution of the growth rates of trade flows. Hence, such a simple setup is able to capture the dynamics of very different economic variables, from firm size (as shown in the URLanization literature) to international trade flows (as documented here). Furthermore, we provide an additional element to the discussion on the relative ability of different international trade models to adequately match empirical regularities .
We develop a simple theoretical framework for the evolution of weighted network that is consistent with a number of stylized features of real-world data. In our framework, the Barabasi-Albert model of network evolution is extended as recently proposed by Stanley and colleagues in their work on complex system dynamics. Our model is verify by means of simulations and real world trade data. We show that the model correctly predicts the intensity and growth distribution of links, the size-variance relationships of the growth of link values, the relationship between the degree and strength of nodes, as well as the scale-free structure of the network .
[ { "type": "R", "before": "use a model of proportionate growth to describe the dynamics of international trade flows. We provide an explanation to the fact that the extensive margin of trade account for a large fraction of the greater exports of large economies, as well as for a number of stylized facts described by the literature on trade networks such as the power-law distribution of connectivity and", "after": "develop a simple theoretical framework for the evolution of weighted network that is consistent with a number of stylized features of real-world data. In our framework,", "start_char_pos": 3, "end_char_pos": 381 }, { "type": "R", "before": "fat tails displayed by the distribution", "after": "Barabasi-Albert model of network evolution is extended as recently proposed by Stanley and colleagues in their work on complex system dynamics. Our model is verify by means of simulations and real world trade data. We show that the model correctly predicts the intensity and growth distribution of links, the size-variance relationships of the growth", "start_char_pos": 386, "end_char_pos": 425 }, { "type": "A", "before": null, "after": "link values,", "start_char_pos": 429, "end_char_pos": 429 }, { "type": "R", "before": "growth rates of trade flows. Hence, such a simple setup is able to capture the dynamics of very different economic variables, from firm size (as shown in the URLanization literature) to international trade flows (as documented here). Furthermore, we provide an additional element to the discussion on the relative ability of different international trade models to adequately match empirical regularities", "after": "relationship between the degree and strength of nodes, as well as the scale-free structure of the network", "start_char_pos": 434, "end_char_pos": 838 } ]
[ 0, 93, 462, 667 ]
0908.0348
2
We develop a simple theoretical framework for the evolution of weighted network that is consistent with a number of stylized features of real-world data. In our framework, the Barabasi-Albert model of network evolution is extended as recently proposed by Stanley and colleagues in their work on complex system dynamics . Our model is verify by means of simulations and real world trade data. We show that the model correctly predicts the intensity and growth distribution of links, the size-variance relationships of the growth of link values , the relationship between the degree and strength of nodes, as well as the scale-free structure of the network.
We develop a simple theoretical framework for the evolution of weighted networks that is consistent with a number of stylized features of real-world data. In our framework, the Barabasi-Albert model of network evolution is extended by assuming that link weights evolve according to a geometric Brownian motion . Our model is verified by means of simulations and real world trade data. We show that the model correctly predicts the intensity and growth distribution of links, the size-variance relationships of the growth of link weights , the relationship between the degree and strength of nodes, as well as the scale-free structure of the network.
[ { "type": "R", "before": "network", "after": "networks", "start_char_pos": 72, "end_char_pos": 79 }, { "type": "R", "before": "as recently proposed by Stanley and colleagues in their work on complex system dynamics", "after": "by assuming that link weights evolve according to a geometric Brownian motion", "start_char_pos": 231, "end_char_pos": 318 }, { "type": "R", "before": "verify", "after": "verified", "start_char_pos": 334, "end_char_pos": 340 }, { "type": "R", "before": "values", "after": "weights", "start_char_pos": 536, "end_char_pos": 542 } ]
[ 0, 153, 320, 391 ]
0908.0657
1
We study the stochastic dynamics of growth and shrinkage of single actin filaments taking into account insertion, removal, and ATP hydrolysis of subunits either according to the vectorial mechanism or to the random mechanism. In a previous work, we developed a model for a single actin or microtubule filament where hydrolysis occurred according to the vectorial mechanism: the filament could grow only from one end, and was in contact with a reservoir of monomers. Here we extend this approach in several ways, by including the dynamics of both ends , by comparing two possible mechanisms of ATP hydrolysis , and by introducing mass conservation for the monomers . Our emphasis is mainly on the role and mechanism of hydrolysis within a single filament . We propose a set of experiments to test the nature of the precise mechanism of hydrolysis within actin filaments.
We study the stochastic dynamics of growth and shrinkage of single actin filaments taking into account insertion, removal, and ATP hydrolysis of subunits either according to the vectorial mechanism or to the random mechanism. In a previous work, we developed a model for a single actin or microtubule filament where hydrolysis occurred according to the vectorial mechanism: the filament could grow only from one end, and was in contact with a reservoir of monomers. Here we extend this approach in several ways, by including the dynamics of both ends and by comparing two possible mechanisms of ATP hydrolysis . Our emphasis is mainly on two possible limiting models for the mechanism of hydrolysis within a single filament , namely the vectorial or the random model . We propose a set of experiments to test the nature of the precise mechanism of hydrolysis within actin filaments.
[ { "type": "R", "before": ",", "after": "and", "start_char_pos": 551, "end_char_pos": 552 }, { "type": "D", "before": ", and by introducing mass conservation for the monomers", "after": null, "start_char_pos": 608, "end_char_pos": 663 }, { "type": "R", "before": "the role and", "after": "two possible limiting models for the", "start_char_pos": 692, "end_char_pos": 704 }, { "type": "A", "before": null, "after": ", namely the vectorial or the random model", "start_char_pos": 754, "end_char_pos": 754 } ]
[ 0, 225, 465, 665, 756 ]
0908.1082
1
We develop a new theory for pricing call type American options in complete markets which do not necessarily admit an equivalent local martingale measure. This resolve an open question proposed by Fernholz and Karatzas [Stochastic Portfolio Theory: A Survey, Handbook of Numerical Analysis, 15:89-168, 2009].
We solve the problem of pricing and optimal exercise of American call-type options in markets which do not necessarily admit an equivalent local martingale measure. This resolves an open question proposed by Fernholz and Karatzas [Stochastic Portfolio Theory: A Survey, Handbook of Numerical Analysis, 15:89-168, 2009].
[ { "type": "R", "before": "develop a new theory for pricing call type American options in complete", "after": "solve the problem of pricing and optimal exercise of American call-type options in", "start_char_pos": 3, "end_char_pos": 74 }, { "type": "R", "before": "resolve", "after": "resolves", "start_char_pos": 159, "end_char_pos": 166 } ]
[ 0, 153 ]
0908.1199
1
We study a model of microtubule assembly/disassembly in which GTP bound to tubulins within the microtubule undergoes stochastic hydrolysis. In contrast to models that only consider a cap of GTP-bound tubulin, stochastic hydrolysis allows GTP-bound tubulin remnants to exist within the microtubule. We find that these buried GTP remnants enable an alternative rescue mechanism , and enhances fluctuations of filament lengths. Our results also show that in the presence of remnants microtubule dynamics can be regulated by changing the depolymerisation rate .
We study a one-dimensional model of microtubule assembly/disassembly in which GTP bound to tubulins within the microtubule undergoes stochastic hydrolysis. In contrast to models that only consider a cap of GTP-bound tubulin, stochastic hydrolysis allows GTP-bound tubulin remnants to exist within the microtubule. We find that these buried GTP remnants enable an alternative mechanism of recovery from shrinkage , and enhances fluctuations of filament lengths. Under conditions for which this alternative mechanism dominates, an increasing depolymerization rate leads to a decrease in dissociation rate and thus a net increase in assembly .
[ { "type": "A", "before": null, "after": "one-dimensional", "start_char_pos": 11, "end_char_pos": 11 }, { "type": "R", "before": "rescue mechanism", "after": "mechanism of recovery from shrinkage", "start_char_pos": 360, "end_char_pos": 376 }, { "type": "R", "before": "Our results also show that in the presence of remnants microtubule dynamics can be regulated by changing the depolymerisation rate", "after": "Under conditions for which this alternative mechanism dominates, an increasing depolymerization rate leads to a decrease in dissociation rate and thus a net increase in assembly", "start_char_pos": 426, "end_char_pos": 556 } ]
[ 0, 140, 298, 425 ]
0908.1209
1
It has been suggested that microtubules and other cytoskeletal filaments may act as electrical transmission lines. An electrical circuit model of the microtubule is constructed incorporating features of its cylindrical structure with nanopores in its walls and is used to test whether such features might contribute to the proposed role of the microtubule as a mediator in intracellular signaling . Based on the results of Brownian dynamics simulations, the nanopores were found to have asymmetric inner and outer conductances, manifested as nonlinear IV curves. Our simulations indicate that a combination of this asymmetry and an internal voltage source arising from the motion of the C-terminal tails can allow a current to be pumped across the microtubule wall and propagate down the microtubule through the lumen. This current is demonstrated to add directly to the longitudinal current resulting from an external voltage source, and could be significant in amplifying low-intensity endogenous currents within the cellular environment .
It has been suggested that microtubules and other cytoskeletal filaments may act as electrical transmission lines. An electrical circuit model of the microtubule is constructed incorporating features of its cylindrical structure with nanopores in its walls . This model is used to study how ionic conductance along the lumen is affected by flux through the nanopores when an external potential is applied across its two ends . Based on the results of Brownian dynamics simulations, the nanopores were found to have asymmetric inner and outer conductances, manifested as nonlinear IV curves. Our simulations indicate that a combination of this asymmetry and an internal voltage source arising from the motion of the C-terminal tails causes a net current to be pumped across the microtubule wall and propagate down the microtubule through the lumen. This effect is demonstrated to enhance and add directly to the longitudinal current through the lumen resulting from an external voltage source, and could be significant in amplifying low-intensity endogenous currents within the cellular environment or as a nano-bioelectronic device .
[ { "type": "R", "before": "and", "after": ". This model", "start_char_pos": 257, "end_char_pos": 260 }, { "type": "R", "before": "test whether such features might contribute to the proposed role of the microtubule as a mediator in intracellular signaling", "after": "study how ionic conductance along the lumen is affected by flux through the nanopores when an external potential is applied across its two ends", "start_char_pos": 272, "end_char_pos": 396 }, { "type": "R", "before": "can allow a", "after": "causes a net", "start_char_pos": 704, "end_char_pos": 715 }, { "type": "R", "before": "current", "after": "effect", "start_char_pos": 824, "end_char_pos": 831 }, { "type": "A", "before": null, "after": "enhance and", "start_char_pos": 851, "end_char_pos": 851 }, { "type": "A", "before": null, "after": "through the lumen", "start_char_pos": 893, "end_char_pos": 893 }, { "type": "A", "before": null, "after": "or as a nano-bioelectronic device", "start_char_pos": 1042, "end_char_pos": 1042 } ]
[ 0, 114, 562, 818 ]
0908.1555
1
We build a very simple model of leveraged asset purchases with margin calls. Investment funds use what is perhaps the most basic financial strategy, called 'value investing' , i.e. systematically attempting to buy underpriced assets. When funds do not borrow, the price fluctuations of the asset are normally distributed and uncorrelated across time. All this changes when the funds are allowed to leverage, i.e. borrow from a bank, to purchase more assets than their wealth would otherwise permit. When funds use leverage, price fluctuations become heavy tailed and display clustered volatility, similar to what is observed in real markets. Previous explanations of fat tails and clustered volatility depended on 'irrational behavior' , such as trend following. We show that the immediate cause of the increase in extreme risks in our model is the risk control policy of the banks : A prudent bank makes itself locally safer by putting a limit to leverage, so when a fund exceeds its leverage limit, it must partially repay its loan by selling the asset. Unfortunately this sometimes happens to all the funds simultaneously when the price is already falling. The resulting nonlinear feedback amplifies downward price movements. At the extreme this causes crashes, but the effect is seen at every time scale, producing a power law of price disturbances. A standard (supposedly more sophisticated) risk control policy by individual banks makes these extreme fluctuations even worse . Thus it is the very effort to control risk at the local level that creates excessive risk at the aggregate level, which shows up as fat tails and clustered volatility .
We build a simple model of leveraged asset purchases with margin calls. Investment funds use what is perhaps the most basic financial strategy, called "value investing" , i.e. systematically attempting to buy underpriced assets. When funds do not borrow, the price fluctuations of the asset are normally distributed and uncorrelated across time. All this changes when the funds are allowed to leverage, i.e. borrow from a bank, to purchase more assets than their wealth would otherwise permit. During good times competition drives investors to funds that use more leverage, because they have higher profits. As leverage increases price fluctuations become heavy tailed and display clustered volatility, similar to what is observed in real markets. Previous explanations of fat tails and clustered volatility depended on "irrational behavior" , such as trend following. Here instead this comes from the fact that leverage limits cause funds to sell into a falling market : A prudent bank makes itself locally safer by putting a limit to leverage, so when a fund exceeds its leverage limit, it must partially repay its loan by selling the asset. Unfortunately this sometimes happens to all the funds simultaneously when the price is already falling. The resulting nonlinear feedback amplifies large downward price movements. At the extreme this causes crashes, but the effect is seen at every time scale, producing a power law of price disturbances. A standard (supposedly more sophisticated) risk control policy in which individual banks base leverage limits on volatility causes leverage to rise during periods of low volatility, and to contract more quickly when volatility gets high, making these extreme fluctuations even worse .
[ { "type": "D", "before": "very", "after": null, "start_char_pos": 11, "end_char_pos": 15 }, { "type": "R", "before": "'value investing'", "after": "\"value investing\"", "start_char_pos": 156, "end_char_pos": 173 }, { "type": "R", "before": "When funds use leverage,", "after": "During good times competition drives investors to funds that use more leverage, because they have higher profits. As leverage increases", "start_char_pos": 499, "end_char_pos": 523 }, { "type": "R", "before": "'irrational behavior'", "after": "\"irrational behavior\"", "start_char_pos": 714, "end_char_pos": 735 }, { "type": "R", "before": "We show that the immediate cause of the increase in extreme risks in our model is the risk control policy of the banks", "after": "Here instead this comes from the fact that leverage limits cause funds to sell into a falling market", "start_char_pos": 763, "end_char_pos": 881 }, { "type": "A", "before": null, "after": "large", "start_char_pos": 1203, "end_char_pos": 1203 }, { "type": "R", "before": "by individual banks makes", "after": "in which individual banks base leverage limits on volatility causes leverage to rise during periods of low volatility, and to contract more quickly when volatility gets high, making", "start_char_pos": 1418, "end_char_pos": 1443 }, { "type": "D", "before": ". Thus it is the very effort to control risk at the local level that creates excessive risk at the aggregate level, which shows up as fat tails and clustered volatility", "after": null, "start_char_pos": 1482, "end_char_pos": 1650 } ]
[ 0, 76, 233, 350, 498, 641, 762, 1055, 1159, 1229, 1354, 1483 ]
0908.1879
1
We study the topological properties of the multi-network of commodity-specific trade relations among world countries over the 1992-2003 period, comparing them with those of the aggregate-trade network, known in the literature as the international trade network (ITN). We show that link-weight distributions of commodity-specific networks are extremely heterogeneous and (quasi) log-normality of aggregate link-weight distribution is generated as a sheer outcome of aggregation. Commodity-specific networks also display average connectivity, clustering and centrality levels very different from their aggregate counterpart. We also find that ITN complete connectivity is mainly achieved through the presence of many weak links that keep commodity-specific networks together , and that the correlation structure existing between topological statistics within each single network is fairly robust and mimics that of the aggregate network. Finally, we employ cross-commodity correlations between link weights to build taxonomies of commodities. Our results suggest that on the top of a relatively time-invariant " intrinsic" taxonomy (based on inherent between-commodity similarities), the roles played by different commodities in the ITN have become more and more dissimilar, possibly as the result of an increased trade specialization .
We study the topological properties of the multinetwork of commodity-specific trade relations among world countries over the 1992-2003 period, comparing them with those of the aggregate-trade network, known in the literature as the international-trade network (ITN). We show that link-weight distributions of commodity-specific networks are extremely heterogeneous and (quasi) log normality of aggregate link-weight distribution is generated as a sheer outcome of aggregation. Commodity-specific networks also display average connectivity, clustering , and centrality levels very different from their aggregate counterpart. We also find that ITN complete connectivity is mainly achieved through the presence of many weak links that keep commodity-specific networks together and that the correlation structure existing between topological statistics within each single network is fairly robust and mimics that of the aggregate network. Finally, we employ cross-commodity correlations between link weights to build hierarchies of commodities. Our results suggest that on the top of a relatively time-invariant `` intrinsic" taxonomy (based on inherent between-commodity similarities), the roles played by different commodities in the ITN have become more and more dissimilar, possibly as the result of an increased trade specialization . Our approach is general and can be used to characterize any multinetwork emerging as a nontrivial aggregation of several interdependent layers .
[ { "type": "R", "before": "multi-network", "after": "multinetwork", "start_char_pos": 43, "end_char_pos": 56 }, { "type": "R", "before": "international trade", "after": "international-trade", "start_char_pos": 233, "end_char_pos": 252 }, { "type": "R", "before": "log-normality", "after": "log normality", "start_char_pos": 378, "end_char_pos": 391 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 552, "end_char_pos": 552 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 774, "end_char_pos": 775 }, { "type": "R", "before": "taxonomies", "after": "hierarchies", "start_char_pos": 1015, "end_char_pos": 1025 }, { "type": "R", "before": "\"", "after": "``", "start_char_pos": 1109, "end_char_pos": 1110 }, { "type": "A", "before": null, "after": ". Our approach is general and can be used to characterize any multinetwork emerging as a nontrivial aggregation of several interdependent layers", "start_char_pos": 1334, "end_char_pos": 1334 } ]
[ 0, 267, 477, 623, 936, 1041 ]
0908.3574
1
Bloom filters are well-known space-efficient data structures that answer set membership queries with some probability of false positives. In an attempt to solve many of the limitations of current inter-networking architectures, some recent proposals rely on including small Bloom filters in packet headers for routing, security, accountability or other purposes that push application states into the packets themselves. In this paper, we consider the design of in-packet Bloom filters (iBF). Our main contribution consists of a series of extensions (1) to increase the practicality and performance of iBFs, (2) to enable false-negative-free element deletion, and (3) to provide security enhancements. An evaluation of the design parameters and implementation alternatives validates the usefulness of the proposed extensions, providing for enhanced and novel iBF networking applications.
The Bloom filter (BF) is a well-known space-efficient data structure that answers set membership queries with some probability of false positives. In an attempt to solve many of the limitations of current inter-networking architectures, some recent proposals rely on including small BFs in packet headers for routing, security, accountability or other purposes that move application states into the packets themselves. In this paper, we consider the design of such in-packet Bloom filters (iBF). Our main contributions are exploring the design space and the evaluation of a series of extensions (1) to increase the practicality and performance of iBFs, (2) to enable false-negative-free element deletion, and (3) to provide security enhancements. In addition to the theoretical estimates, extensive simulations of the multiple design parameters and implementation alternatives validate the usefulness of the extensions, providing for enhanced and novel iBF networking applications.
[ { "type": "R", "before": "Bloom filters are", "after": "The Bloom filter (BF) is a", "start_char_pos": 0, "end_char_pos": 17 }, { "type": "R", "before": "structures that answer", "after": "structure that answers", "start_char_pos": 50, "end_char_pos": 72 }, { "type": "R", "before": "Bloom filters", "after": "BFs", "start_char_pos": 274, "end_char_pos": 287 }, { "type": "R", "before": "push", "after": "move", "start_char_pos": 367, "end_char_pos": 371 }, { "type": "A", "before": null, "after": "such", "start_char_pos": 461, "end_char_pos": 461 }, { "type": "R", "before": "contribution consists", "after": "contributions are exploring the design space and the evaluation", "start_char_pos": 502, "end_char_pos": 523 }, { "type": "R", "before": "An evaluation of the", "after": "In addition to the theoretical estimates, extensive simulations of the multiple", "start_char_pos": 702, "end_char_pos": 722 }, { "type": "R", "before": "validates", "after": "validate", "start_char_pos": 773, "end_char_pos": 782 }, { "type": "D", "before": "proposed", "after": null, "start_char_pos": 805, "end_char_pos": 813 } ]
[ 0, 137, 419, 492, 701 ]
0909.0065
1
We study Atlas-type models of equity markets with local characteristics that depend on both name and rank, and in ways that induce a stability of the capital distribution. Ergodic properties and rankings of processes are examined with reference to the theory of reflected Brownian motions in polyhedral domains. In the context of such models , we discuss properties of various investment strategies, including the so-called growth-optimal and universal portfolios.
We study Atlas-type models of equity markets with local characteristics that depend on both name and rank, and in ways that induce a stable capital distribution. Ergodic properties and rankings of processes are examined with reference to the theory of reflected Brownian motions in polyhedral domains. In the context of such models we discuss properties of various investment strategies, including the so-called growth-optimal and universal portfolios.
[ { "type": "R", "before": "stability of the", "after": "stable", "start_char_pos": 133, "end_char_pos": 149 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 342, "end_char_pos": 343 } ]
[ 0, 171, 311 ]
0909.1411
1
We study the stochastic oscillations in protein number dynamics of the three-component negative feedback transcription regulatory system called the repressilator . We quantify the degree of fluctuations in oscillation periods and amplitudes, as well as the noise propagation along the regulatory cascade using the exact stochastic simulations with the Gillespie algorithm in the stable oscillation regime . For the single protein species level, the fluctuation in the oscillation amplitudes is found to be larger than that of oscillation periods, the distributions of which are reasonably described by the Weibull distribution and the Gaussian tail, respectively. Correlations between successive periods and between successive amplitudes respectively are measured to assess the noise propagation properties, which are found to decay faster for the amplitude than for the period. Local fluctuation property is also studied.
We study the noise characteristics of stochastic oscillations in protein number dynamics of simple genetic oscillatory systems. Using the three-component negative feedback transcription regulatory system called the repressilator as a prototypical example, we quantify the degree of fluctuations in oscillation periods and amplitudes, as well as the noise propagation along the regulatory cascade in the stable oscillation regime via dynamic Monte Carlo simulations . For the single protein-species level, the fluctuation in the oscillation amplitudes is found to be larger than that of the oscillation periods, the distributions of which are reasonably described by the Weibull distribution and the Gaussian tail, respectively. Correlations between successive periods and between successive amplitudes , respectively, are measured to assess the noise propagation properties, which are found to decay faster for the amplitude than for the period. The local fluctuation property is also studied.
[ { "type": "A", "before": null, "after": "noise characteristics of", "start_char_pos": 13, "end_char_pos": 13 }, { "type": "A", "before": null, "after": "simple genetic oscillatory systems. Using", "start_char_pos": 68, "end_char_pos": 68 }, { "type": "R", "before": ". We", "after": "as a prototypical example, we", "start_char_pos": 164, "end_char_pos": 168 }, { "type": "D", "before": "using the exact stochastic simulations with the Gillespie algorithm", "after": null, "start_char_pos": 306, "end_char_pos": 373 }, { "type": "A", "before": null, "after": "via dynamic Monte Carlo simulations", "start_char_pos": 407, "end_char_pos": 407 }, { "type": "R", "before": "protein species", "after": "protein-species", "start_char_pos": 425, "end_char_pos": 440 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 529, "end_char_pos": 529 }, { "type": "R", "before": "respectively", "after": ", respectively,", "start_char_pos": 742, "end_char_pos": 754 }, { "type": "R", "before": "Local", "after": "The local", "start_char_pos": 883, "end_char_pos": 888 } ]
[ 0, 165, 409, 667, 882 ]