Columns:
  doc_id: string, 2 to 10 characters
  revision_depth: string, 5 distinct values
  before_revision: string, 3 to 309k characters
  after_revision: string, 5 to 309k characters
  edit_actions: list
  sents_char_pos: sequence
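The column list above describes one record per revision pair; the rows that follow repeat the fields in that order. A minimal Python sketch of the record layout (the concrete types and field meanings are my inference from this preview, not an official schema):

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class RevisionRecord:
    doc_id: str            # arXiv identifier, e.g. "1005.1357"
    revision_depth: str    # one of 5 values; "1", "2", "3" appear below
    before_revision: str   # abstract text before this revision
    after_revision: str    # abstract text after this revision
    edit_actions: List[Dict[str, Any]]   # "A"/"D"/"R" edits with char offsets
    sents_char_pos: List[int]  # apparently sentence-start offsets in
                               # before_revision, e.g. [0, 89, ...] below
```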
1005.1357
1
This paper works out fair values of stock loan model with automatic termination clause . This stock loan is treated as a generalized perpetual American option with an automatic termination clause and possibly negative interest rate . Since it helps a bank to control the risk, banks should charge less service fees compared to stock loans without automatic termination clauses . The automatic termination clause is in fact a stop order set by the bank. We aim at establishing explicitly the value of such a loan and ranges of fair values of key parameters : this loan size, interest rate, fee for providing such a service and quantity of this automatic termination clause and relationships among these parameters as well as the optimal terminable stopping times .
This paper works out fair values of stock loan model with automatic termination clause , cap and margin . This stock loan is treated as a generalized perpetual American option with possibly negative interest rate and some constraints . Since it helps a bank to control the risk, the banks charge less service fees compared to stock loans without any constraints . The automatic termination clause , cap and margin are in fact a stop order set by the bank. Mathematically, it is a kind of optimal stopping problems arising from the pricing of financial products which is first revealed. We aim at establishing explicitly the value of such a loan and ranges of fair values of key parameters : this loan size, interest rate, cap, margin and fee for providing such a service and quantity of this automatic termination clause and relationships among these parameters as well as the optimal exercise times. We present numerical results and make analysis about the model parameters and how they impact on value of stock loan .
[ { "type": "A", "before": null, "after": ", cap and margin", "start_char_pos": 87, "end_char_pos": 87 }, { "type": "D", "before": "an automatic termination clause and", "after": null, "start_char_pos": 165, "end_char_pos": 200 }, { "type": "A", "before": null, "after": "and some constraints", "start_char_pos": 233, "end_char_pos": 233 }, { "type": "R", "before": "banks should", "after": "the banks", "start_char_pos": 279, "end_char_pos": 291 }, { "type": "R", "before": "automatic termination clauses", "after": "any constraints", "start_char_pos": 349, "end_char_pos": 378 }, { "type": "R", "before": "is", "after": ", cap and margin are", "start_char_pos": 414, "end_char_pos": 416 }, { "type": "A", "before": null, "after": "Mathematically, it is a kind of optimal stopping problems arising from the pricing of financial products which is first revealed.", "start_char_pos": 455, "end_char_pos": 455 }, { "type": "A", "before": null, "after": "cap, margin and", "start_char_pos": 592, "end_char_pos": 592 }, { "type": "R", "before": "terminable stopping times", "after": "exercise times. We present numerical results and make analysis about the model parameters and how they impact on value of stock loan", "start_char_pos": 740, "end_char_pos": 765 } ]
[ 0, 89, 235, 380, 454 ]
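Each edit_actions entry references a span of before_revision via start_char_pos and end_char_pos (offsets into the original string): type "A" inserts the "after" text at a point (start equals end), "D" deletes the "before" text, and "R" replaces "before" with "after". A hedged sketch of reconstructing the revised text from a record; this is my inference from the offsets shown, and since the texts look whitespace-tokenized, character splicing recovers the tokens but not always the exact spacing:

```python
def apply_edit_actions(before: str, actions: list) -> str:
    """Apply A/D/R edit actions to `before`. Offsets index the original
    string, so actions are applied from the highest offset downward to
    keep earlier offsets valid (assuming actions do not overlap)."""
    text = before
    for act in sorted(actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # "after" is null for "D" actions
        text = text[:act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text

# Adapted snippet of the 1005.1357 record above (offset 87 in the full
# abstract becomes 51 in this shortened excerpt):
print(apply_edit_actions(
    "stock loan model with automatic termination clause .",
    [{"type": "A", "before": None, "after": ", cap and margin",
      "start_char_pos": 51, "end_char_pos": 51}],
))
# -> "stock loan model with automatic termination clause , cap and margin."
# (the record's after_revision reads "... cap and margin .", with a space
#  before the period: the tokenized-spacing caveat mentioned above)
```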
1005.1360
1
This paper considers optimal control problem of a large insurance company under higher standard of solvency . The company controls proportional reinsurance rate, dividend pay-outs and investing process to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. This paper aims at describing the optimal return function as well as the optimal policy .
This paper considers optimal control problem of a large insurance company under a fixed insolvency probability . The company controls proportional reinsurance rate, dividend pay-outs and investing process to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. This paper aims at describing the optimal return function as well as the optimal policy . As a by-product, the paper theoretically sets a risk-based capital standard to ensure the capital requirement of can cover the total risk .
[ { "type": "R", "before": "higher standard of solvency", "after": "a fixed insolvency probability", "start_char_pos": 80, "end_char_pos": 107 }, { "type": "A", "before": null, "after": ". As a by-product, the paper theoretically sets a risk-based capital standard to ensure the capital requirement of can cover the total risk", "start_char_pos": 384, "end_char_pos": 384 } ]
[ 0, 109, 295 ]
1005.1361
1
This paper considers nonlinear optimal stochastic control of insurance company with proportional reinsurance policy under small bankrupt probability constraints . The company controls the reinsurance rate and dividend payout process to maximize the expected present value of the dividends until the time of bankruptcy. However, if the optimal dividend barrier is too low to be acceptable, it will make the company result in bankruptcy soon. In addition , although risk and return should be highly correlated, over-risking is not a good recipe for high return . Therefore, the managements of the company have to impose their preferred risk level and additional charge on firm seeking services beyond or lower than the level. These turn out to be nonlinear regular-singular stochastic optimal problems under small bankrupt probability constraints. This paper aims at solving this kind of the optimal problems, that is, working out the optimal control policy of the insurance company , in particular, exact minimum dividend barrier, and finding the optimal return function associated with the optimal policy .
This paper considers nonlinear regular-singular stochastic optimal control of large insurance company . The company controls the reinsurance rate and dividend payout process to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. However, if the optimal dividend barrier is too low to be acceptable, it will make the company result in bankruptcy soon. Moreover , although risk and return should be highly correlated, over-risking is not a good recipe for high return , the supervisors of the company have to impose their preferred risk level and additional charge on firm seeking services beyond or lower than the preferred risk level. These indeed are nonlinear regular-singular stochastic optimal problems under insolvency probability constraints. This paper aims at solving this kind of the optimal problems, that is, deriving the optimal retention ratio,dividend payout level, optimal return function and optimal control policy of the insurance company . As a by-product, the paper also sets a risk-based capital standard to ensure the capital requirement of can cover the total given risk, and the effect of the risk level on optimal retention ratio, dividend payout level and optimal control policy are also presented .
[ { "type": "R", "before": "optimal stochastic control of insurance company with proportional reinsurance policy under small bankrupt probability constraints", "after": "regular-singular stochastic optimal control of large insurance company", "start_char_pos": 31, "end_char_pos": 160 }, { "type": "R", "before": "dividends", "after": "dividend pay-outs", "start_char_pos": 279, "end_char_pos": 288 }, { "type": "R", "before": "In addition", "after": "Moreover", "start_char_pos": 441, "end_char_pos": 452 }, { "type": "R", "before": ". Therefore, the managements", "after": ", the supervisors", "start_char_pos": 559, "end_char_pos": 587 }, { "type": "A", "before": null, "after": "preferred risk", "start_char_pos": 717, "end_char_pos": 717 }, { "type": "R", "before": "turn out to be", "after": "indeed are", "start_char_pos": 731, "end_char_pos": 745 }, { "type": "R", "before": "small bankrupt", "after": "insolvency", "start_char_pos": 807, "end_char_pos": 821 }, { "type": "R", "before": "working out the optimal", "after": "deriving the optimal retention ratio,dividend payout level, optimal return function and optimal", "start_char_pos": 918, "end_char_pos": 941 }, { "type": "R", "before": ", in particular, exact minimum dividend barrier, and finding the optimal return function associated with the optimal policy", "after": ". As a by-product, the paper also sets a risk-based capital standard to ensure the capital requirement of can cover the total given risk, and the effect of the risk level on optimal retention ratio, dividend payout level and optimal control policy are also presented", "start_char_pos": 982, "end_char_pos": 1105 } ]
[ 0, 162, 318, 440, 560, 724, 846 ]
1005.1361
2
This paper considers nonlinear regular-singular stochastic optimal control of large insurance company. The company controls the reinsurance rate and dividend payout process to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. However, if the optimal dividend barrier is too low to be acceptable, it will make the company result in bankruptcy soon. Moreover, although risk and return should be highly correlated, over-risking is not a good recipe for high return, the supervisors of the company have to impose their preferred risk level and additional charge on firm seeking services beyond or lower than the preferred risk level. These indeed are nonlinear regular-singular stochastic optimal problems under insolvency probability constraints. This paper aims at solving this kind of the optimal problems, that is, deriving the optimal retention ratio,dividend payout level, optimal return function and optimal control policy of the insurance company. As a by-product, the paper also sets a risk-based capital standard to ensure the capital requirement of can cover the total given risk, and the effect of the risk level on optimal retention ratio, dividend payout level and optimal control policy are also presented.
This paper considers nonlinear regular-singular stochastic optimal control of large insurance company. The company controls the reinsurance rate and dividend payout process to maximize the expected present value of the dividend pay-outs until the time of bankruptcy. However, if the optimal dividend barrier is too low to be acceptable, it will make the company result in bankruptcy soon. Moreover, although risk and return should be highly correlated, over-risking is not a good recipe for high return, the supervisors of the company have to impose their preferred risk level and additional charge on firm seeking services beyond or lower than the preferred risk level. These indeed are nonlinear regular-singular stochastic optimal problems under ruin probability constraints. This paper aims at solving this kind of the optimal problems, that is, deriving the optimal retention ratio,dividend payout level, optimal return function and optimal control strategy of the insurance company. As a by-product, the paper also sets a risk-based capital standard to ensure the capital requirement of can cover the total given risk, and the effect of the risk level on optimal retention ratio, dividend payout level and optimal control strategy are also presented.
[ { "type": "R", "before": "insolvency", "after": "ruin", "start_char_pos": 749, "end_char_pos": 759 }, { "type": "R", "before": "policy", "after": "strategy", "start_char_pos": 960, "end_char_pos": 966 }, { "type": "R", "before": "policy", "after": "strategy", "start_char_pos": 1232, "end_char_pos": 1238 } ]
[ 0, 102, 266, 388, 670, 784, 992 ]
1005.1476
1
We study global and local robustness properties of several estimators for shape and scale in a generalized Pareto model. The estimators considered in this paper cover maximum likelihood estimators, skipped maximum likelihood estimators, Cram\'er-von-Mises Minimum Distance estimators, and , as a special case of quantile-based estimators, Pickands Estimator. We further consider an estimator matching the population median and an asymmetric, robust estimator of scale (kMAD) to the empirical ones (kMedMAD), which may be tuned to an expected FSBP of 34\%. These estimators are compared to one-step estimators distinguished as optimal in the shrinking neighborhood setting, i.e.; the most bias-robust estimator minimizing the maximal (asymptotic) bias and the estimatorminimizing the maximal (asymptotic) MSE. For each of these estimators, we determine the finite sample breakdown point, the influence function, as well as statistical accuracy measured by asymptotic bias, variance, and mean squared error - all evaluated uniformly on shrinking convex contamination neighborhoods. Finally, we check these asymptotic theoretical findings against finite sample behavior by an extensive simulation study .
We study robustness properties of several procedures for joint estimation of shape and scale in a generalized Pareto model. The estimators we primarily focus on, MBRE and OMSE, are one-step estimators distinguished as optimally-robust in the shrinking neighborhood setting, i.e.; they minimize the maximal bias, respectively, on a specific such neighborhood, the maximal mean squared error. For their initialization, we propose a particular Location-Dispersion (LD) estimator, kMedMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations) against the empirical counterparts. These optimally-robust estimators are compared to maximum likelihood, skipped maximum likelihood, Cramer-von-Mises minimum distance, method of median, and Pickands estimators. To quantify their deviation from robust optimality, for each of these suboptimal estimators, we determine the finite sample breakdown point, the influence function, as well as the statistical accuracy measured by asymptotic bias, variance, and MSE - all evaluated uniformly on shrinking neighborhoods. These asymptotic findings are complemented by an extensive simulation study to assess their finite sample behavior .
[ { "type": "D", "before": "global and local", "after": null, "start_char_pos": 9, "end_char_pos": 25 }, { "type": "R", "before": "estimators for", "after": "procedures for joint estimation of", "start_char_pos": 59, "end_char_pos": 73 }, { "type": "R", "before": "considered in this paper cover maximum likelihood estimators, skipped maximum likelihood estimators, Cram\\'er-von-Mises Minimum Distance estimators, and , as a special case of quantile-based estimators, Pickands Estimator. We further consider an estimator matching the population median and an asymmetric, robust estimator of scale (kMAD) to the empirical ones (kMedMAD), which may be tuned to an expected FSBP of 34\\%. These estimators are compared to", "after": "we primarily focus on, MBRE and OMSE, are", "start_char_pos": 136, "end_char_pos": 588 }, { "type": "R", "before": "optimal", "after": "optimally-robust", "start_char_pos": 626, "end_char_pos": 633 }, { "type": "R", "before": "the most bias-robust estimator minimizing the maximal (asymptotic) bias and the estimatorminimizing the maximal (asymptotic) MSE. For", "after": "they minimize the maximal bias, respectively, on a specific such neighborhood, the maximal mean squared error. For their initialization, we propose a particular Location-Dispersion (LD) estimator, kMedMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations) against the empirical counterparts. These optimally-robust estimators are compared to maximum likelihood, skipped maximum likelihood, Cramer-von-Mises minimum distance, method of median, and Pickands estimators. To quantify their deviation from robust optimality, for", "start_char_pos": 679, "end_char_pos": 812 }, { "type": "A", "before": null, "after": "suboptimal", "start_char_pos": 827, "end_char_pos": 827 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 923, "end_char_pos": 923 }, { "type": "R", "before": "mean squared error", "after": "MSE", "start_char_pos": 988, "end_char_pos": 1006 }, { "type": "R", "before": "convex contamination neighborhoods. Finally, we check these asymptotic theoretical findings against finite sample behavior", "after": "neighborhoods. These asymptotic findings are complemented", "start_char_pos": 1046, "end_char_pos": 1168 }, { "type": "A", "before": null, "after": "to assess their finite sample behavior", "start_char_pos": 1202, "end_char_pos": 1202 } ]
[ 0, 120, 358, 555, 678, 1081 ]
1005.1476
2
We study robustness properties of several procedures for joint estimation of shape and scale in a generalized Pareto model. The estimators we primarily focus on, MBRE and OMSE, are one-step estimators distinguished as optimally-robust in the shrinking neighborhood setting, i. e.; they minimize the maximal bias, respectively, on a specific such neighborhood, the maximal mean squared error. For their initialization, we propose a particular Location-Dispersion (LD) estimator, kMedMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations ) against the empirical counterparts. These optimally-robust estimators are compared to maximum likelihood, skipped maximum likelihood, Cramer-von-Mises minimum distance, method of median, and Pickands estimators. To quantify their deviation from robust optimality, for each of these suboptimal estimators, we determine the finite sample breakdown point, the influence function, as well as the statistical accuracy measured by asymptotic bias, variance, and MSE - all evaluated uniformly on shrinking neighborhoods. These asymptotic findings are complemented by an extensive simulation study to assess their finite sample behavior .
This paper deals with optimally-robust parameter estimation in generalized Pareto distributions (GPDs). These arise naturally in many situations where one is interested in the behavior of extreme events as motivated by the Pickands-Balkema-de Haan extreme value theorem (PBHT). The application we have in mind is calculation of the regulatory capital required by Basel II for a bank to cover operational risk. In this context the tail behavior of the underlying distribution is crucial. This is where extreme value theory enters, suggesting to estimate these high quantiles parameterically using, e.g. GPDs. Robust statistics in this context offers procedures bounding the influence of single observations, so provides reliable inference in the presence of moderate deviations from the distributional model assumptions, respectively from the mechanisms underlying the PBHT .
[ { "type": "R", "before": "We study robustness properties of several procedures for joint estimation of shape and scale in a generalized Pareto model. The estimators we primarily focus on, MBRE and OMSE, are one-step estimators distinguished as optimally-robust in the shrinking neighborhood setting, i. e.; they minimize the maximal bias, respectively, on a specific such neighborhood, the maximal mean squared error. For their initialization, we propose a particular Location-Dispersion (LD) estimator, kMedMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations ) against the empirical counterparts. These", "after": "This paper deals with", "start_char_pos": 0, "end_char_pos": 635 }, { "type": "R", "before": "estimators are compared to maximum likelihood, skipped maximum likelihood, Cramer-von-Mises minimum distance, method of median, and Pickands estimators. To quantify their deviation from robust optimality, for each of these suboptimal estimators, we determine the finite sample breakdown point, the influence function, as well as the statistical accuracy measured by asymptotic bias, variance, and MSE - all evaluated uniformly on shrinking neighborhoods. These asymptotic findings are complemented by an extensive simulation study to assess their finite sample behavior", "after": "parameter estimation in generalized Pareto distributions (GPDs). These arise naturally in many situations where one is interested in the behavior of extreme events as motivated by the Pickands-Balkema-de Haan extreme value theorem (PBHT). The application we have in mind is calculation of the regulatory capital required by Basel II for a bank to cover operational risk. In this context the tail behavior of the underlying distribution is crucial. This is where extreme value theory enters, suggesting to estimate these high quantiles parameterically using, e.g. GPDs. Robust statistics in this context offers procedures bounding the influence of single observations, so provides reliable inference in the presence of moderate deviations from the distributional model assumptions, respectively from the mechanisms underlying the PBHT", "start_char_pos": 653, "end_char_pos": 1222 } ]
[ 0, 123, 280, 391, 629, 805, 1107 ]
1005.1862
1
We consider the estimation of integrated covariance matrices of high dimensional diffusion processes by using high frequency data . We start by studying the most commonly used estimator, the realized covariance matrix {\it (RCV) . We show that in the high dimensional case when the dimension p and the observation frequency n grow in the same rate, the limiting empirical spectral distribution of RCV depends on the covolatility processes not only through the underlying integrated covariance matrixSigma , but also on how the covolatility processes vary in time. In particular, for two high dimensional diffusion processes with the same integrated covariance matrix, the empirical spectral distributions of their RCVs can be very different. Hence in terms of making inference about the spectrum of the integrated covariance matrix, the RCV is in generalnot a good proxy to rely on in the high dimensional caseenko-Pastur type theorem for weighted sample covariance matrices, based on which we further establish a Mar\v{c}enko-Pastur type theorem for RCV matrices for a class \mathcal{C} of diffusion processes. The results explicitly demonstrate how the time-variability of the covolatility process affects the LSD of RCV matrix} . We then propose an alternative estimator, the {\it time-variation adjusted realized covariance matrix (TVARCV) , for a class of diffusion processes . We show that the limiting empirical spectral distribution of our proposed estimator TVARCV does depend solely on that of Sigma through a Marcenko-Pastur equation, and hence the TVARCV can be used to recover the empirical spectral distribution of Sigma by inverting the Marcenko-Pastur equation, which can then be applied to further applications such as portfolio allocation, risk management, etc. .
We consider the estimation of integrated covariance (ICV) matrices of high dimensional diffusion processes based on high frequency observations . We start by studying the most commonly used estimator, the {\it realized covariance (RCV) matrix . We show that in the high dimensional case when the dimension p and the observation frequency n grow in the same rate, the limiting spectral distribution (LSD) of the RCV matrix depends on the covolatility process not only through the targeting ICV matrix , but also on how the covolatility process varies in time. We establish a Mar\v{cenko-Pastur type theorem for weighted sample covariance matrices, based on which we further establish a Mar\v{c}enko-Pastur type theorem for RCV matrices for a class \mathcal{C} of diffusion processes. The results explicitly demonstrate how the time-variability of the covolatility process affects the LSD of RCV matrix} . We then propose an alternative estimator, the {\it time-variation adjusted realized covariance (TVARCV) matrix . We show that for diffusion processes in class \mathcal{C solely on that of the targeting ICV matrix through a Mar\v{c .
[ { "type": "A", "before": null, "after": "(ICV)", "start_char_pos": 52, "end_char_pos": 52 }, { "type": "R", "before": "by using high frequency data", "after": "based on high frequency observations", "start_char_pos": 102, "end_char_pos": 130 }, { "type": "D", "before": "realized covariance matrix", "after": null, "start_char_pos": 192, "end_char_pos": 218 }, { "type": "A", "before": null, "after": "realized covariance", "start_char_pos": 224, "end_char_pos": 224 }, { "type": "A", "before": null, "after": "matrix", "start_char_pos": 231, "end_char_pos": 231 }, { "type": "R", "before": "empirical spectral distribution of RCV", "after": "spectral distribution (LSD) of the RCV matrix", "start_char_pos": 365, "end_char_pos": 403 }, { "type": "R", "before": "processes", "after": "process", "start_char_pos": 432, "end_char_pos": 441 }, { "type": "R", "before": "underlying integrated covariance matrixSigma", "after": "targeting ICV matrix", "start_char_pos": 463, "end_char_pos": 507 }, { "type": "R", "before": "processes vary", "after": "process varies", "start_char_pos": 543, "end_char_pos": 557 }, { "type": "D", "before": "In particular, for two high dimensional diffusion processes with the same integrated covariance matrix, the empirical spectral distributions of their RCVs can be very different. Hence in terms of making inference about the spectrum of the integrated covariance matrix, the RCV is in general", "after": null, "start_char_pos": 567, "end_char_pos": 857 }, { "type": "D", "before": "not", "after": null, "start_char_pos": 857, "end_char_pos": 860 }, { "type": "R", "before": "a good proxy to rely on in the high dimensional case", "after": "We establish a Mar\\v{c", "start_char_pos": 861, "end_char_pos": 913 }, { "type": "D", "before": "matrix", "after": null, "start_char_pos": 1331, "end_char_pos": 1337 }, { "type": "R", "before": ", for a class of diffusion processes", "after": "matrix", "start_char_pos": 1347, "end_char_pos": 1383 }, { "type": "R", "before": "the limiting empirical spectral distribution of our proposed estimator TVARCV does depend", "after": "for diffusion processes in class \\mathcal{C", "start_char_pos": 1399, "end_char_pos": 1488 }, { "type": "R", "before": "Sigma through a Marcenko-Pastur equation, and hence the TVARCV can be used to recover the empirical spectral distribution of Sigma by inverting the Marcenko-Pastur equation, which can then be applied to further applications such as portfolio allocation, risk management, etc.", "after": "the targeting ICV matrix through a Mar\\v{c", "start_char_pos": 1507, "end_char_pos": 1782 } ]
[ 0, 132, 566, 744, 1114, 1235, 1385 ]
1005.1862
3
We consider the estimation of integrated covariance (ICV) matrices of high dimensional diffusion processes based on high frequency observations. We start by studying the most commonly used estimator, the %DIFDELCMD < {\it %%% realized covariance (RCV) matrix. We show that in the high dimensional case when the dimension p and the observation frequency n grow in the same rate, the limiting spectral distribution (LSD) of RCV depends on the covolatility process not only through the targeting ICV ,but also on how the covolatility process varies in time . We establish a Marc enko-Pastur type theorem for weighted sample covariance matrices, based on which we obtain a Marc enko-Pastur type theorem for RCV for a class %DIFDELCMD < \sC %%% of diffusion processes. The results explicitly demonstrate how the time variability of the covolatility process affects the LSD of RCV. We further propose an alternative estimator, the %DIFDELCMD < {\it %%% time-variation adjusted realized covariance (TVARCV) matrix. We show that for processes in class %DIFDELCMD < \sC%%% , the TVARCV possesses the desirable property that its LSD depends solely on that of the targeting ICV through the Marc enko-Pastur equation, and hence, in particular, the TVARCV can be used to recover the empirical spectral distribution of the ICV by using existing algorithms.
We consider the estimation of integrated covariance (ICV) matrices of high dimensional diffusion processes based on high frequency observations. We start by studying the most commonly used estimator, the %DIFDELCMD < {\it %%% realized covariance (RCV) matrix. We show that in the high dimensional case when the dimension p and the observation frequency n grow in the same rate, the limiting spectral distribution (LSD) of RCV depends on the covolatility process not only through the targeting ICV, but also on how the covolatility process varies in time . We establish a Marc enko--Pastur type theorem for weighted sample covariance matrices, based on which we obtain a Marc enko--Pastur type theorem for RCV for a class %DIFDELCMD < \sC %%% \mathcal{C of diffusion processes. The results explicitly demonstrate how the time variability of the covolatility process affects the LSD of RCV. We further propose an alternative estimator, the %DIFDELCMD < {\it %%% time-variation adjusted realized covariance (TVARCV) matrix. We show that for processes in class %DIFDELCMD < \sC%%% \mathcal C , the TVARCV possesses the desirable property that its LSD depends solely on that of the targeting ICV through the Marc enko--Pastur equation, and hence, in particular, the TVARCV can be used to recover the empirical spectral distribution of the ICV by using existing algorithms.
[ { "type": "R", "before": "realized covariance", "after": "realized covariance", "start_char_pos": 226, "end_char_pos": 245 }, { "type": "D", "before": "not only through the targeting ICV", "after": null, "start_char_pos": 462, "end_char_pos": 496 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 497, "end_char_pos": 498 }, { "type": "R", "before": "but also on how the covolatility process varies in time", "after": "not only through the targeting ICV, but also on how the covolatility process varies in time", "start_char_pos": 498, "end_char_pos": 553 }, { "type": "R", "before": "enko-Pastur", "after": "enko--Pastur", "start_char_pos": 576, "end_char_pos": 587 }, { "type": "R", "before": "enko-Pastur", "after": "enko--Pastur", "start_char_pos": 674, "end_char_pos": 685 }, { "type": "A", "before": null, "after": "\\mathcal{C", "start_char_pos": 740, "end_char_pos": 740 }, { "type": "A", "before": null, "after": "\\mathcal", "start_char_pos": 1065, "end_char_pos": 1065 }, { "type": "A", "before": null, "after": "C", "start_char_pos": 1066, "end_char_pos": 1066 }, { "type": "R", "before": "enko-Pastur", "after": "enko--Pastur", "start_char_pos": 1187, "end_char_pos": 1198 } ]
[ 0, 144, 259, 555, 764, 876, 1008 ]
1005.2581
1
CUDA and OpenCL offer two different interfaces for programming GPUs . OpenCL is an open standard that can be used to program CPUs, GPUs, and other devices from different vendors, while CUDA is specific to NVIDIA GPUs. Although OpenCL promises a portable language for GPU programming, its generality may entail a performance penalty. In this paper, we compare the performance of CUDA and OpenCL using complex, near-identical kernels . We show that when using NVIDIA compiler tools, converting a CUDA kernel to an OpenCL kernel involves minimal modifications. Making such a kernel compile with ATI's build tools involves more modifications. Our performance tests measure and compare data transfer times to and from the GPU, kernel execution times, and end-to-end application execution times for both CUDA and OpenCL.
CUDA and OpenCL are two different frameworks for GPU programming . OpenCL is an open standard that can be used to program CPUs, GPUs, and other devices from different vendors, while CUDA is specific to NVIDIA GPUs. Although OpenCL promises a portable language for GPU programming, its generality may entail a performance penalty. In this paper, we use complex, near-identical kernels from a Quantum Monte Carlo application to compare the performance of CUDA and OpenCL . We show that when using NVIDIA compiler tools, converting a CUDA kernel to an OpenCL kernel involves minimal modifications. Making such a kernel compile with ATI's build tools involves more modifications. Our performance tests measure and compare data transfer times to and from the GPU, kernel execution times, and end-to-end application execution times for both CUDA and OpenCL.
[ { "type": "R", "before": "offer two different interfaces for programming GPUs", "after": "are two different frameworks for GPU programming", "start_char_pos": 16, "end_char_pos": 67 }, { "type": "A", "before": null, "after": "use complex, near-identical kernels from a Quantum Monte Carlo application to", "start_char_pos": 351, "end_char_pos": 351 }, { "type": "D", "before": "using complex, near-identical kernels", "after": null, "start_char_pos": 395, "end_char_pos": 432 } ]
[ 0, 217, 332, 434, 558, 639 ]
1005.3454
1
This paper addresses the question of how to invest in an extremely robust growth-optimal way in a market where the instantaneous expected return of the underlying process is unknown. The optimal investment strategy is identified using a generalized version of the principle eigenfunction for an elliptic second-order differential operator which depends on the covariance structure of the underlying process used for investing. The aforementioned robust growth-optimal strategy can also be seen as a limit, as the terminal date does to infinity, of optimal arbitrages in the terminology of Fernholz and Karatzas.
This paper addresses the question of how to invest in a robust growth-optimal way in a market where the instantaneous expected return of the underlying process is unknown. The optimal investment strategy is identified using a generalized version of the principal eigenfunction for an elliptic second-order differential operator which depends on the covariance structure of the underlying process used for investing. The robust growth-optimal strategy can also be seen as a limit, as the terminal date goes to infinity, of optimal arbitrages in the terminology of Fernholz and Karatzas.
[ { "type": "R", "before": "an extremely", "after": "a", "start_char_pos": 54, "end_char_pos": 66 }, { "type": "R", "before": "principle", "after": "principal", "start_char_pos": 264, "end_char_pos": 273 }, { "type": "D", "before": "aforementioned", "after": null, "start_char_pos": 431, "end_char_pos": 445 }, { "type": "R", "before": "does", "after": "goes", "start_char_pos": 527, "end_char_pos": 531 } ]
[ 0, 182, 426 ]
1005.3454
2
This paper addresses the question of how to invest in a robust growth-optimal way in a market where the instantaneous expected return of the underlying process is unknown. The optimal investment strategy is identified using a generalized version of the principal eigenfunction for an elliptic second-order differential operator which depends on the covariance structure of the underlying process used for investing. The robust growth-optimal strategy can also be seen as a limit, as the terminal date goes to infinity, of optimal arbitrages in the terminology of Fernholz and Karatzas .
This paper addresses the question of how to invest in a robust growth-optimal way in a market where the instantaneous expected return of the underlying process is unknown. The optimal investment strategy is identified using a generalized version of the principal eigenfunction for an elliptic second-order differential operator , which depends on the covariance structure of the underlying process used for investing. The robust growth-optimal strategy can also be seen as a limit, as the terminal date goes to infinity, of optimal arbitrages in the terminology of Fernholz and Karatzas Ann. Appl. Probab. 20 (2010) 1179-1204 .
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 328, "end_char_pos": 328 }, { "type": "A", "before": null, "after": "Ann. Appl. Probab. 20 (2010) 1179-1204", "start_char_pos": 586, "end_char_pos": 586 } ]
[ 0, 171, 416 ]
1005.3565
1
In this paper, we analyze a real-valued reflected backward stochastic differential equation (RBSDE) with an unbounded obstacle and an unbounded terminal condition when its generator f has quadratic growth in the z variable . In particular, we obtain existence, comparison, and stability results. Moreover, we study the link between the reflected forward backward SDEs and the obstacle problem for semi-linear PDEs in which the non-linearity appears as the square of the gradient. Finally, we prove a comparison theorem for these obstacle problems when the generator is convex or concave on the z variable .
In this paper, we analyze a real-valued reflected backward stochastic differential equation (RBSDE) with an unbounded obstacle and an unbounded terminal condition when its generator f has quadratic growth in the z-variable . In particular, we obtain existence, comparison, and stability results. Moreover, we study the link between the reflected forward backward SDEs and the obstacle problem for semi-linear parabolic PDEs in which the non-linearity appears as the square of the gradient. Finally, we prove a comparison theorem for these obstacle problems when the generator is convex or concave in the z-variable .
[ { "type": "R", "before": "z variable", "after": "z-variable", "start_char_pos": 212, "end_char_pos": 222 }, { "type": "A", "before": null, "after": "parabolic", "start_char_pos": 409, "end_char_pos": 409 }, { "type": "R", "before": "on the z variable", "after": "in the z-variable", "start_char_pos": 588, "end_char_pos": 605 } ]
[ 0, 224, 295, 480 ]
1005.3565
2
In this paper, we analyze a real-valued reflected backward stochastic differential equation (RBSDE) with an unbounded obstacle and an unbounded terminal condition when its generator f has quadratic growth in the z-variable. In particular, we obtain existence, comparison, and stability results . Moreover, we study the link between the reflected forward backward SDEs and the obstacle problem for semi-linear parabolic PDEs in which the non-linearity appears as the square of the gradient. Finally, we prove a comparison theorem for these obstacle problems when the generator is convex or concave in the z-variable.
In this paper, we analyze a real-valued reflected backward stochastic differential equation (RBSDE) with an unbounded obstacle and an unbounded terminal condition when its generator f has quadratic growth in the z-variable. In particular, we obtain existence, comparison, and stability results , and consider the optimal stopping for quadratic g-evaluations. As an application of our results we analyze the obstacle problem for semi-linear parabolic PDEs in which the non-linearity appears as the square of the gradient. Finally, we prove a comparison theorem for these obstacle problems when the generator is convex or concave in the z-variable.
[ { "type": "R", "before": ". Moreover, we study the link between the reflected forward backward SDEs and", "after": ", and consider the optimal stopping for quadratic g-evaluations. As an application of our results we analyze", "start_char_pos": 294, "end_char_pos": 371 } ]
[ 0, 223, 295, 489 ]
1005.3610
1
We present a Monte Carlo algorithm that allows efficient and unbiased sampling of polymer melts consisting of two chains of equal length that jointly visit all the sites of a cubic lattice . Using this algorithm , we show that in the limit of a large lattice the two chains phase separate , in contradiction with the ideal chain behaviour predicted by the Flory theorem .
We present a Monte Carlo algorithm that provides efficient and unbiased sampling of polymer melts consisting of two chains of equal length that jointly visit all the sites of a cubic lattice with rod geometry L x L x rL and non-periodic (hard wall) boundary conditions . Using this algorithm for chains of length up to 40 000 monomers and aspect ratios 1 <= r <= 10 , we show that in the limit of a large lattice the two chains phase separate . This demixing phenomenon is present already for r=1, and becomes more pronounced, albeit not perfect, as r is increased .
[ { "type": "R", "before": "allows", "after": "provides", "start_char_pos": 40, "end_char_pos": 46 }, { "type": "A", "before": null, "after": "with rod geometry L x L x rL and non-periodic (hard wall) boundary conditions", "start_char_pos": 189, "end_char_pos": 189 }, { "type": "A", "before": null, "after": "for chains of length up to 40 000 monomers and aspect ratios 1 <= r <= 10", "start_char_pos": 213, "end_char_pos": 213 }, { "type": "R", "before": ", in contradiction with the ideal chain behaviour predicted by the Flory theorem", "after": ". This demixing phenomenon is present already for r=1, and becomes more pronounced, albeit not perfect, as r is increased", "start_char_pos": 291, "end_char_pos": 371 } ]
[ 0, 191 ]
1005.4417
1
In this paper we investigate novel applications of a new class of equations which we call time-delayed backward stochastic differential equations. We show that many pricing and hedging problems concerning structured products, participating products or variable annuities can be handled by this equations. Time-delayed BSDEs may appear when we want to find a strategy and a portfolio which should replicate the liability whose pay-off depends on the applied investment strategy or the values of the portfolio. This is usually the case for investment funds or life insurance investment contracts which have bonus distribution mechanisms or provide protection against low returns . We consider some life insurance products, derive the corresponding time-delayed BSDEs and solve them explicitly or at least provide hints how to solve them numerically. We investigate perfect hedging and quadratic hedging which is crucial for insurance applications . We study consequences and give an economic interpretation of the fact that a time-delay BSDE may not have a solution or may have multiple solutions.
In this paper we investigate novel applications of a new class of equations which we call time-delayed backward stochastic differential equations. Time-delayed BSDEs may arise when we want to find a strategy and a portfolio which should replicate the liability whose pay-off depends on the applied investment strategy or the past values of the portfolio. In our setting, an investment portfolio serves simultaneously as the underlying security on which the liability is contingent and as a hedge portfolio for the liability. This is usually the case for capital protected funds and performance linked pay-offs . We consider pay-offs arising under participating contracts, variable annuities, structured products and hedge funds with hurdle rates and high water marks. We derive the corresponding time-delayed BSDEs and solve them explicitly or at least provide hints how to solve them numerically. Perfect hedging and quadratic hedging are investigated . We study consequences and give an economic interpretation of the fact that a time-delay BSDE may not have a solution or may have multiple solutions.
[ { "type": "D", "before": "We show that many pricing and hedging problems concerning structured products, participating products or variable annuities can be handled by this equations.", "after": null, "start_char_pos": 147, "end_char_pos": 304 }, { "type": "R", "before": "appear", "after": "arise", "start_char_pos": 328, "end_char_pos": 334 }, { "type": "A", "before": null, "after": "past", "start_char_pos": 484, "end_char_pos": 484 }, { "type": "A", "before": null, "after": "In our setting, an investment portfolio serves simultaneously as the underlying security on which the liability is contingent and as a hedge portfolio for the liability.", "start_char_pos": 510, "end_char_pos": 510 }, { "type": "R", "before": "investment funds or life insurance investment contracts which have bonus distribution mechanisms or provide protection against low returns", "after": "capital protected funds and performance linked pay-offs", "start_char_pos": 540, "end_char_pos": 678 }, { "type": "R", "before": "some life insurance products,", "after": "pay-offs arising under participating contracts, variable annuities, structured products and hedge funds with hurdle rates and high water marks. We", "start_char_pos": 693, "end_char_pos": 722 }, { "type": "R", "before": "We investigate perfect", "after": "Perfect", "start_char_pos": 850, "end_char_pos": 872 }, { "type": "R", "before": "which is crucial for insurance applications", "after": "are investigated", "start_char_pos": 903, "end_char_pos": 946 } ]
[ 0, 146, 304, 509, 680, 849, 948 ]
1005.4417
2
In this paper we investigate novel applications of a new class of equations which we call time-delayed backward stochastic differential equations. Time-delayed BSDEs may arise when we want to find a strategy and a portfolio which should replicate the liability whose pay-off depends on the applied investment strategy or the past values of the portfolio. In our setting, an investment portfolio serves simultaneously as the underlying security on which the liability is contingent and as a hedge portfolio for the liability . This is usually the case for capital protected funds and performance linked pay-offs. We consider pay-offs arising under participating contracts , variable annuities, structured products and hedge funds with hurdle rates and high water marks . We derive the corresponding time-delayed BSDEs and solve them explicitly or at least provide hints how to solve them numerically. Perfect hedging and quadratic hedging are investigated. We study consequences and give an economic interpretation of the fact that a time-delay BSDE may not have a solution or may have multiple solutions.
In this paper we investigate novel applications of a new class of equations which we call time-delayed backward stochastic differential equations. Time-delayed BSDEs may arise in finance when we want to find an investment strategy and an investment portfolio which should replicate a liability or meet a target depending on the applied strategy or the past values of the portfolio. In this setting, a managed investment portfolio serves simultaneously as the underlying security on which the liability /target is contingent and as a replicating portfolio for that liability/target . This is usually the case for capital-protected investments and performance-linked pay-offs. We give examples of pricing, hedging and portfolio management problems (asset-liability management problems) which could be investigated in the framework of time-delayed BSDEs. Our motivation comes from life insurance and we focus on participating contracts and variable annuities . We derive the corresponding time-delayed BSDEs and solve them explicitly or at least provide hints how to solve them numerically. We give a financial interpretation of the theoretical fact that a time-delayed BSDE may not have a solution or may have multiple solutions.
[ { "type": "A", "before": null, "after": "in finance", "start_char_pos": 176, "end_char_pos": 176 }, { "type": "R", "before": "a strategy and a", "after": "an investment strategy and an investment", "start_char_pos": 198, "end_char_pos": 214 }, { "type": "R", "before": "the liability whose pay-off depends", "after": "a liability or meet a target depending", "start_char_pos": 248, "end_char_pos": 283 }, { "type": "D", "before": "investment", "after": null, "start_char_pos": 299, "end_char_pos": 309 }, { "type": "R", "before": "our setting, an", "after": "this setting, a managed", "start_char_pos": 359, "end_char_pos": 374 }, { "type": "A", "before": null, "after": "/target", "start_char_pos": 468, "end_char_pos": 468 }, { "type": "R", "before": "hedge portfolio for the liability", "after": "replicating portfolio for that liability/target", "start_char_pos": 492, "end_char_pos": 525 }, { "type": "R", "before": "capital protected funds and performance linked", "after": "capital-protected investments and performance-linked", "start_char_pos": 557, "end_char_pos": 603 }, { "type": "R", "before": "consider pay-offs arising under participating contracts , variable annuities, structured products and hedge funds with hurdle rates and high water marks", "after": "give examples of pricing, hedging and portfolio management problems (asset-liability management problems) which could be investigated in the framework of time-delayed BSDEs. Our motivation comes from life insurance and we focus on participating contracts and variable annuities", "start_char_pos": 617, "end_char_pos": 769 }, { "type": "R", "before": "Perfect hedging and quadratic hedging are investigated. We study consequences and give an economic", "after": "We give a financial", "start_char_pos": 902, "end_char_pos": 1000 }, { "type": "A", "before": null, "after": "theoretical", "start_char_pos": 1023, "end_char_pos": 1023 }, { "type": "R", "before": "time-delay", "after": "time-delayed", "start_char_pos": 1036, "end_char_pos": 1046 } ]
[ 0, 146, 355, 527, 613, 771, 901, 957 ]
1006.0155
1
We propose a simple stochastic model for time series which is analytically tractable, easy to simulate and which captures some relevant stylized facts of financial indexes , including scaling properties. We show that the model fits the Dow Jones Industrial Average timeseries in the period 1935-2009 with a remarkable accuracy . Despite its simplicity , the model has several interesting features . The volatility is not constant and displays high peaks. The empirical distribution of the log-returns (increments of the logarithm of the index) is non-Gaussian and may exhibit heavy tails. Log-returns corresponding to disjoint time intervals are uncorrelated but not independent: the correlation of their absolute values decays exponentially fast in the distance between the time intervals for large distances, while it has a slower decay for moderate distances. Finally, the distribution of the log-returns obeys scaling relations that are detected on real time series, but are not satisfied by most available models .
We propose a simple stochastic volatility model which is analytically tractable, very easy to simulate and which captures some relevant stylized facts of financial assets , including scaling properties. In particular, the model displays a crossover in the log-return distribution from power-law tails (small time) to a Gaussian behavior (large time), slow decay in the volatility autocorrelation and multiscaling of moments . Despite its few parameters , the model is able to fit several key features of the time series of financial indexes, such as the Dow Jones Industrial Average, with a remarkable accuracy .
[ { "type": "R", "before": "model for time series", "after": "volatility model", "start_char_pos": 31, "end_char_pos": 52 }, { "type": "A", "before": null, "after": "very", "start_char_pos": 86, "end_char_pos": 86 }, { "type": "R", "before": "indexes", "after": "assets", "start_char_pos": 165, "end_char_pos": 172 }, { "type": "R", "before": "We show that the model fits the Dow Jones Industrial Average timeseries in the period 1935-2009 with a remarkable accuracy", "after": "In particular, the model displays a crossover in the log-return distribution from power-law tails (small time) to a Gaussian behavior (large time), slow decay in the volatility autocorrelation and multiscaling of moments", "start_char_pos": 205, "end_char_pos": 327 }, { "type": "R", "before": "simplicity", "after": "few parameters", "start_char_pos": 342, "end_char_pos": 352 }, { "type": "R", "before": "has several interesting features . The volatility is not constant and displays high peaks. The empirical distribution of the log-returns (increments of the logarithm of the index) is non-Gaussian and may exhibit heavy tails. Log-returns corresponding to disjoint time intervals are uncorrelated but not independent: the correlation of their absolute values decays exponentially fast in the distance between the time intervals for large distances, while it has a slower decay for moderate distances. Finally, the distribution of the log-returns obeys scaling relations that are detected on real time series, but are not satisfied by most available models", "after": "is able to fit several key features of the time series of financial indexes, such as the Dow Jones Industrial Average, with a remarkable accuracy", "start_char_pos": 365, "end_char_pos": 1018 } ]
[ 0, 204, 329, 399, 455, 589, 863 ]
1006.0271
1
The second law of thermodynamics implies that no macroscopic system may oscillate indefinitely without consuming energy. This letter places bounds on the degree and quality of such oscillations when the system in question is homogeneous and has discrete states. In a closed system, the maximum number of oscillations is bounded by the system dimension. In open systems, the system size bounds the quality factor of oscillation. This fundamental limit serves as a key design principle for engineered dynamics in chemical systems and nano-scale machines .
The second law of thermodynamics implies that no macroscopic system may oscillate indefinitely without consuming energy. The question of the number of possible oscillations and the coherent quality of those oscillations remain unanswered. This paper proves the upper-bounds on the number and quality of such oscillations when the system in question is homogeneously driven and has discrete states. In a closed system, the maximum number of oscillations is bounded by the system dimension. In open systems, the system size bounds the quality factor of oscillation. I also prove that homogeneously driven systems with discrete states must have a loop topology to allow oscillations. The consequences of this limit are explored in the context of chemical clocks and limit cycles .
[ { "type": "R", "before": "This letter places bounds on the degree", "after": "The question of the number of possible oscillations and the coherent quality of those oscillations remain unanswered. This paper proves the upper-bounds on the number", "start_char_pos": 121, "end_char_pos": 160 }, { "type": "R", "before": "homogeneous", "after": "homogeneously driven", "start_char_pos": 225, "end_char_pos": 236 }, { "type": "R", "before": "This fundamental limit serves as a key design principle for engineered dynamics in chemical systems and nano-scale machines", "after": "I also prove that homogeneously driven systems with discrete states must have a loop topology to allow oscillations. The consequences of this limit are explored in the context of chemical clocks and limit cycles", "start_char_pos": 428, "end_char_pos": 551 } ]
[ 0, 120, 261, 352, 427 ]
1006.0271
2
The second law of thermodynamics implies that no macroscopic system may oscillate indefinitely without consuming energy. The questions of the number of possible oscillations and the coherent quality of those oscillations remain unanswered. This paper proves the upper-bounds on the number and quality of such oscillations when the system in question is homogeneously driven and has discrete states. In a closed system, the maximum number of oscillations is bounded by the system dimension . In open systems, the system size bounds the quality factor of oscillation. I also prove that homogeneously driven systems with discrete states must have a loop topology to allow oscillations . The consequences of this limit are explored in the context of chemical clocks and limit cycles.
The second law of thermodynamics implies that no macroscopic system may oscillate indefinitely without consuming energy. The question of the number of possible oscillations and the coherent quality of these oscillations remain unanswered. This paper proves the upper-bounds on the number and quality of such oscillations when the system in question is homogeneously driven and has a discrete network of states. In a closed system, the maximum number of oscillations is bounded by the number of states in the network . In open systems, the size of the network bounds the quality factor of oscillation. This work also explores how the quality factor of macrostate oscillations, such as would be observed in chemical reactions, are bounded by the smallest equivalent loop of the network, not the size of the entire system . The consequences of this limit are explored in the context of chemical clocks and limit cycles.
[ { "type": "R", "before": "questions", "after": "question", "start_char_pos": 125, "end_char_pos": 134 }, { "type": "R", "before": "those", "after": "these", "start_char_pos": 202, "end_char_pos": 207 }, { "type": "R", "before": "discrete", "after": "a discrete network of", "start_char_pos": 382, "end_char_pos": 390 }, { "type": "R", "before": "system dimension", "after": "number of states in the network", "start_char_pos": 472, "end_char_pos": 488 }, { "type": "R", "before": "system size", "after": "size of the network", "start_char_pos": 512, "end_char_pos": 523 }, { "type": "R", "before": "I also prove that homogeneously driven systems with discrete states must have a loop topology to allow oscillations", "after": "This work also explores how the quality factor of macrostate oscillations, such as would be observed in chemical reactions, are bounded by the smallest equivalent loop of the network, not the size of the entire system", "start_char_pos": 566, "end_char_pos": 681 } ]
[ 0, 120, 239, 398, 490, 565, 683 ]
1006.0611
1
Ribosome is a molecular machine that moves on a mRNA track while, simultaneously, polymerizing a protein using the mRNA also as the corresponding template. We introduce quantitative measures of its performance which characterize, for example, the speed and fidelity of the template-dictated polymerization. We also define two different measures of efficiency and strength of mechano-chemical coupling of this molecular machine. We calculate all these quantities analytically . Some of these quantities show apparently counterintuitive trends of variation with the quality of kinetic proofreading. We interpret the origin of these trends . We suggest new experiments for testing some of the ideas presented here.
Ribosome is a molecular machine that moves on a mRNA track while, simultaneously, polymerizing a protein using the mRNA also as the corresponding template. We define, and analytically calculate, two different measures of the efficiency of this machine . However, we argue that its performance is evaluated better in terms of the translational fidelity and the speed with which it polymerizes a protein. We define both these quantities and calculate these analytically. Fidelity is a measure of the quality of the products while the total quantity of products synthesized in a given interval depends on the speed of polymerization. We show that for synthesizing a large quantity of proteins, it is not necessary to sacrifice the quality . We also explore the effects of the quality control mechanism on the strength of mechano-chemical coupling. We suggest experiments for testing some of the ideas presented here.
[ { "type": "R", "before": "introduce quantitative measures of its performance which characterize, for example, the speed and fidelity of the template-dictated polymerization. We also define", "after": "define, and analytically calculate,", "start_char_pos": 159, "end_char_pos": 321 }, { "type": "R", "before": "efficiency and strength of mechano-chemical coupling of this molecular machine. We calculate all these quantities analytically", "after": "the efficiency of this machine", "start_char_pos": 348, "end_char_pos": 474 }, { "type": "R", "before": "Some of these quantities show apparently counterintuitive trends of variation with the qualityof kinetic proofreading. We interpret the origin of these trends", "after": "However, we arugue that its performance is evaluated better in terms of the translational fidelity and the speed with which it polymerizes a protein. We define both these quantities and calculate these analytically. Fidelity is a measure of the quality of the products while the total quantity of products synthesized in a given interval depends on the speed of polymerization. We show that for synthesizing a large quantity of proteins, it is not necessary to sacrifice the quality", "start_char_pos": 477, "end_char_pos": 635 }, { "type": "A", "before": null, "after": "also explore the effects of the quality control mechanism on the strength of mechano-chemical coupling. We", "start_char_pos": 641, "end_char_pos": 641 }, { "type": "D", "before": "new", "after": null, "start_char_pos": 650, "end_char_pos": 653 } ]
[ 0, 155, 306, 427, 595, 637 ]
1006.0727
1
Nature uses combinatorial control strategies that are different from those used by present pharmacological approaches for controlling disease. Function and differentiation of cells in organisms are naturally regulated by control networks that are bipartite, i.e. contain two different types of nodes: controllers and targets. Each controller acts on many targets, and target nodes are controlled by many controllers in a strongly overlapping many-to-many network structure. We present a quantitative analysis of the network properties of three key biological systems for which sufficiently comprehensive datasets are now available: transcription factors, microRNAs and protein kinases . We find that some parameters defining general network properties vary only within limited ranges , suggesting the existence of common control strategies in biology. The many-to-many structure might permit higher robustness to variation. A mathematical model also provides insight into the factors determining the values of the biological control network parameters and the network structure. The model showed an increased probability of finding solutions that achieve control within the naturally occurring range. This analysis of biological control networks suggests a new paradigm for pharmacological interventions. Combinatorial therapies could be found by searching within biomimetic pharmacological sets with the same many-to-many structure as the biological systems and parameters within the ranges we observed in biology. The molecular tools for testing this new approach have recently become available . The necessary experiments would be greatly accelerated by a concerted effort of several pharmaceutical and biotech companies willing to pool compounds. This effort has potential benefits for the therapies of complex diseases, for the pharmaceutical industry and for our basic understanding of biological control .
Cells are regulated by networks of controllers having many targets, and targets affected by many controllers , but these " many-to-many " combinatorial control systems are poorly understood. Here we analyze distinct cellular networks ( transcription factors, microRNAs , and protein kinases ) and a drug-target network. Certain network properties seem universal across systems and species , suggesting the existence of common control strategies in biology. The number of controllers is ~8\% of targets and the density of links is 2.5\%\pm 1.2\%. Links per node are predominantly exponentially distributed, implying conservation of the average, which we explain using a mathematical model of robustness in control networks. These findings suggest that optimal pharmacological strategies may benefit from a similar, many-to-many combinatorial structure, and molecular tools are available to test this approach .
[ { "type": "R", "before": "Nature uses combinatorial control strategies that are different from those used by present pharmacological approaches for controlling disease. Function and differentiation of cells URLanisms are naturally regulated by control networks that are bipartite, i.e. contain two different types of nodes: controllers and targets. Each controller acts on", "after": "Cells are regulated by networks of controllers having", "start_char_pos": 0, "end_char_pos": 346 }, { "type": "R", "before": "target nodes are controlled", "after": "targets affected", "start_char_pos": 365, "end_char_pos": 392 }, { "type": "R", "before": "in a strongly overlapping", "after": ", but these \"", "start_char_pos": 413, "end_char_pos": 438 }, { "type": "R", "before": "network structure. We present a quantitative analysis of the network properties of three key biological systems for which sufficiently comprehensive datasets are now available:", "after": "\" combinatorial control systems are poorly understood. Here we analyze distinct cellular networks (", "start_char_pos": 452, "end_char_pos": 628 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 662, "end_char_pos": 662 }, { "type": "R", "before": ". We find that some parameters defining general network properties vary only within limited ranges", "after": ") and a drug-target network. Certain network properties seem universal across systems and species", "start_char_pos": 683, "end_char_pos": 781 }, { "type": "A", "before": null, "after": "number of controllers is ~8\\% of targets and the density of links is 2.5\\%", "start_char_pos": 854, "end_char_pos": 854 }, { "type": "A", "before": null, "after": "1.2\\%. Links per node are predominantly exponentially distributed, implying conservation of the average, which we explain using a mathematical model of robustness in control networks. These findings suggest that optimal pharmacological strategies may benefit from a similar,", "start_char_pos": 858, "end_char_pos": 858 }, { "type": "R", "before": "structure might permit higher robustness to variation. A mathematical model also provides insight into the factors determining the values of the biological control network parameters and the network structure. The model showed an increased probability of finding solutions that achieve control within the naturally occurring range. This analysis of biological control networkssuggests a new paradigm for pharmacological interventions. Combinatorial therapies could be found by searching within biomimetic pharmacological sets with the same many-to-many structure as the biological systems and parameters within the ranges we observed in biology. The molecular tools for testing this new approach have recently become available . The necessary experiments would be greatly accelerated by a concerted effort of several pharmaceutical and biotech companies willing to pool compounds. This effort has potential benefits for the therapies of complex diseases, for the pharmaceutical industry and for our basic understanding of biological control", "after": "combinatorial structure, and molecular tools are available to test this approach", "start_char_pos": 872, "end_char_pos": 1912 } ]
[ 0, 142, 322, 470, 684, 849, 926, 1081, 1203, 1306, 1517, 1600, 1752 ]
1006.1350
1
We define a copula process which describes the dependencies between arbitrarily many random variables independently of their marginal distributions. As an example, we develop a stochastic volatility model, Gaussian Copula Process Volatility (GCPV), to predict the latent standard deviations of a sequence of random variables. To learn the parameters of GCPV we use Bayesian inference, with the Laplace approximation, and with Markov chain Monte Carlo as an alternative. We find both methods comparable. We also find our model can outperform GARCH , on simulated and financial data. And unlike GARCH, GCPV can easily handle missing data, incorporate covariates other than time, and model a rich class of covariance structures.
We define a copula process which describes the dependencies between arbitrarily many random variables independently of their marginal distributions. As an example, we develop a stochastic volatility model, Gaussian Copula Process Volatility (GCPV), to predict the latent standard deviations of a sequence of random variables. To make predictions we use Bayesian inference, with the Laplace approximation, and with Markov chain Monte Carlo as an alternative. We find both methods comparable. We also find our model can outperform GARCH on simulated and financial data. And unlike GARCH, GCPV can easily handle missing data, incorporate covariates other than time, and model a rich class of covariance structures.
[ { "type": "R", "before": "learn the parameters of GCPV", "after": "make predictions", "start_char_pos": 329, "end_char_pos": 357 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 547, "end_char_pos": 548 } ]
[ 0, 148, 325, 469, 502, 581 ]
1006.2012
1
This paper considers a top-down approach for CDO valuation and proposes a market model. We extend previous research on this topic in two directions: on the one side, we use as driving process for the interest rate dynamics a time-inhomogeneous L\'evy process, and on the other side, we do not assume that all maturities are available in the market. Only a discrete tenor structure is considered, which is in the spirit of the classical Libor market model. We create a general framework for market models based on multidimensional semimartingales. This framework is able to capture dependence between the default-free and the defaultable dynamics, as well as contagion effects. Conditions for absence of arbitrage and valuation formulas for tranches of CDOs are given .
The goal of this paper is to specify dynamic term structure models with discrete tenor structure for credit portfolios in a top-down setting driven by time-inhomogeneous L\'evy processes. We provide a new framework, conditions for absence of arbitrage , explicit examples, an affine setup which includes contagion and pricing formulas for STCDOs and options on STCDOs. A calibration to iTraxx data with an extended Kalman filter shows an excellent fit over the full observation period. The calibration is done on a set of CDO tranche spreads ranging across six tranches and three maturities .
[ { "type": "R", "before": "This paper considers a", "after": "The goal of this paper is to specify dynamic term structure models with discrete tenor structure for credit portfolios in a", "start_char_pos": 0, "end_char_pos": 22 }, { "type": "R", "before": "approach for CDO valuation and proposes a market model. We extend previous research on this topic in two directions: on the one side, we use as driving process for the interest rate dynamics a", "after": "setting driven by", "start_char_pos": 32, "end_char_pos": 224 }, { "type": "R", "before": "process, and on the other side, we do not assume that all maturities are available in the market. Only a discrete tenor structure is considered, which is in the spirit of the classical Libor market model. We create a general frameworkfor market models based on multidimensional semimartingales. This framework is able to capture dependence between the default-free and the defaultable dynamics, as well as contagion effects. Conditions", "after": "processes. We provide a new framework, conditions", "start_char_pos": 251, "end_char_pos": 686 }, { "type": "R", "before": "and valuation formulas for tranches of CDOs are given", "after": ", explicit examples, an affine setup which includes contagion and pricing formulas for STCDOs and options on STCDOs. A calibration to iTraxx data with an extended Kalman filter shows an excellent fit over the full observation period. The calibration is done on a set of CDO tranche spreads ranging across six tranches and three maturities", "start_char_pos": 712, "end_char_pos": 765 } ]
[ 0, 87, 348, 455, 545, 675 ]
1006.2273
1
We consider option pricing in a regime-switching market. As the market is incomplete, there is no unique price for a derivative. We apply the good-deal bounds idea to obtain ranges for the price of a derivative. As an illustration, we calculate the good-deal pricing bounds for a European call option . We examine the stability of the good-deal pricing bounds for the European call option when we change the market model's parameters . We find that the pricing bounds depend strongly on the market parameters .
We consider option pricing in a regime-switching diffusion market. As the market is incomplete, there is no unique price for a derivative. We apply the good-deal pricing bounds idea to obtain ranges for the price of a derivative. As an illustration, we calculate the good-deal pricing bounds for a European call option and we also examine the stability of these bounds when we change the generator of the Markov chain which drives the regime-switching . We find that the pricing bounds depend strongly on the choice of the generator .
[ { "type": "A", "before": null, "after": "diffusion", "start_char_pos": 49, "end_char_pos": 49 }, { "type": "A", "before": null, "after": "pricing", "start_char_pos": 153, "end_char_pos": 153 }, { "type": "R", "before": ". We", "after": "and we also", "start_char_pos": 303, "end_char_pos": 307 }, { "type": "R", "before": "the good-deal pricing bounds for the European call option", "after": "these bounds", "start_char_pos": 333, "end_char_pos": 390 }, { "type": "R", "before": "market model's parameters", "after": "generator of the Markov chain which drives the regime-switching", "start_char_pos": 410, "end_char_pos": 435 }, { "type": "R", "before": "market parameters", "after": "choice of the generator", "start_char_pos": 493, "end_char_pos": 510 } ]
[ 0, 57, 129, 213, 304, 437 ]
1006.2313
1
In this paper, flow models of networks without congestion control are considered. We suppose that users transmit data in the network at their maximum throughput and some erasure codes make the transmission robust to packet loss . We study the stability of the resulting stochastic processes in two particular cases: linear networks and upstream trees. For the case of linear networks, we notably use fluid limits and an interesting phenomenon of "time scale separation" is occurring. Tight bounds on the stability region of linear networks are given. For the case of upstream trees, underlying monotonic properties are used. Finally, the asymptotic stability of those processes is analyzed when the maximum throughput of the users decreases to 0. An appropriate scaling is introduced and used to prove that the stability region of thoses networks is asymptotically maximized.
In this paper, flow models of networks without congestion control are considered. Users generate data transfers according to some Poisson processes and transmit corresponding packet at a fixed rate equal to their access rate until the entire document is received at the destination; some erasure codes are used to make the transmission robust to packet losses . We study the stability of the stochastic process representing the number of active flows in two particular cases: linear networks and upstream trees. For the case of linear networks, we notably use fluid limits and an interesting phenomenon of "time scale separation" occurs. Bounds on the stability region of linear networks are given. For the case of upstream trees, underlying monotonic properties are used. Finally, the asymptotic stability of those processes is analyzed when the access rate of the users decreases to 0. An appropriate scaling is introduced and used to prove that the stability region of those networks is asymptotically maximized.
[ { "type": "R", "before": "We suppose that users transmit data in the network at their maximum throughput and", "after": "Users generate data transfers according to some Poisson processes and transmit corresponding packet at a fixed rate equal to their access rate until the entire document is received at the destination;", "start_char_pos": 82, "end_char_pos": 164 }, { "type": "A", "before": null, "after": "are used to", "start_char_pos": 184, "end_char_pos": 184 }, { "type": "R", "before": "loss", "after": "losses", "start_char_pos": 224, "end_char_pos": 228 }, { "type": "R", "before": "resulting stochastic processes", "after": "stochastic process representing the number of active flows", "start_char_pos": 261, "end_char_pos": 291 }, { "type": "R", "before": "is occurring. Tight bounds", "after": "occurs. Bounds", "start_char_pos": 471, "end_char_pos": 497 }, { "type": "R", "before": "maximum throughput", "after": "access rate", "start_char_pos": 700, "end_char_pos": 718 }, { "type": "R", "before": "thoses", "after": "those", "start_char_pos": 832, "end_char_pos": 838 } ]
[ 0, 81, 230, 352, 484, 551, 625, 747 ]
1006.2327
1
The quite recent technological rise in molecular biology allowed single molecule manipulation experiments, where molecule stretching plays a primary role. In order to understand the experimental data, it is felt the urge of some physical and mathematical models to quantitatively express the mechanical properties of the observed molecules. In this paper we reconsider a simple phenomenological model which reproduces the behaviour of a molecule of double stranded DNA (dsDNA) under tension. The problem is easily solved via the cavity method both in the small forces range and in presence of overstretching transition, so that some properties such as bending stiffness and elasticity of dsDNA emerge in a very clear manner. Our theoretical findings are successfully fitted to real measurements and compared to Monte Carlo simulations, confirming the quality of the approach.
The quite recent technological rise in molecular biology allowed single molecule manipulation experiments, where molecule stretching plays a primary role. In order to understand the experimental data, it is felt the urge of some physical and mathematical models to quantitatively express the mechanical properties of the observed molecules. In this paper we reconsider a simple phenomenological model which reproduces the behaviour of single and double stranded DNA under tension. The problem is easily solved via the cavity method both in the small forces range and in presence of overstretching transition, so that some properties such as bending stiffness and elasticity of DNA emerge in a very clear manner. Our theoretical findings are successfully fitted to real measurements and compared to Monte Carlo simulations, confirming the quality of the approach.
[ { "type": "R", "before": "a molecule of", "after": "single and", "start_char_pos": 435, "end_char_pos": 448 }, { "type": "D", "before": "(dsDNA)", "after": null, "start_char_pos": 469, "end_char_pos": 476 }, { "type": "R", "before": "dsDNA", "after": "DNA", "start_char_pos": 688, "end_char_pos": 693 } ]
[ 0, 154, 340, 491, 724 ]
1006.2489
1
Properties of arbitrary truncated Levy flight are investigated by method of cumulant approach . The set of cumulants that characterized an arbitrary truncated Levy distribution is found and their shape of truncation dependence is defined . The influence of truncation shape on the properties of Gaussian and Levy regimes of process is investigated.
The problem of an arbitrary truncated Levy flight description using the method of cumulant approach has been solved . The set of cumulants of the truncated Levy distribution given the assumption of arbitrary truncation has been found . The influence of truncation shape on the truncated Levy flight properties in the Gaussian and the Levy regimes has been investigated.
[ { "type": "R", "before": "Properties of", "after": "The problem of an", "start_char_pos": 0, "end_char_pos": 13 }, { "type": "R", "before": "are investigated by", "after": "description using the", "start_char_pos": 46, "end_char_pos": 65 }, { "type": "A", "before": null, "after": "has been solved", "start_char_pos": 94, "end_char_pos": 94 }, { "type": "R", "before": "that characterized an arbitrary", "after": "of the", "start_char_pos": 118, "end_char_pos": 149 }, { "type": "R", "before": "is found and their shape of truncation dependence is defined", "after": "given the assumption of arbitrary truncation has been found", "start_char_pos": 178, "end_char_pos": 238 }, { "type": "R", "before": "properties of Gaussian and Levy regimes of process is", "after": "truncated Levy flight properties in the Gaussian and the Levy regimes has been", "start_char_pos": 282, "end_char_pos": 335 } ]
[ 0, 96, 240 ]
1006.2634
1
The notion of autocatalysis actually covers a large variety of mechanistic realisations of chemical systems . From the most general definition of autocatalysis, that is a process in which a chemical compound is able to catalyze its own formation, several different systems can be described. We detail the different categories of autocatalyses, and compare them on the basis of their mechanistic, kinetic, and dynamic properties. It is proposed that the key signature of autocatalysis is its kinetic pattern expressed in a mathematical form . It will be shown how such a pattern can be generated by different systems of chemical reactions .
Autocatalysis is a fundamental concept, used in a wide range of domains . From the most general definition of autocatalysis, that is a process in which a chemical compound is able to catalyze its own formation, several different systems can be described. We detail the different categories of autocatalyses, and compare them on the basis of their mechanistic, kinetic, and dynamic properties. It is shown how autocatalytic patterns can be generated by different systems of chemical reactions. The notion of autocatalysis covering a large variety of mechanistic realisations with very similar behaviors, it is proposed that the key signature of autocatalysis is its kinetic pattern expressed in a mathematical form .
[ { "type": "R", "before": "The notion of autocatalysis actually covers a large variety of mechanistic realisations of chemical systems", "after": "Autocatalysis is a fundamental concept, used in a wide range of domains", "start_char_pos": 0, "end_char_pos": 107 }, { "type": "A", "before": null, "after": "shown how autocatalytic patterns can be generated by different systems of chemical reactions. The notion of autocatalysis covering a large variety of mechanistic realisations with very similar behaviors, it is", "start_char_pos": 435, "end_char_pos": 435 }, { "type": "D", "before": ". It will be shown how such a pattern can be generated by different systems of chemical reactions", "after": null, "start_char_pos": 541, "end_char_pos": 638 } ]
[ 0, 109, 290, 428, 542 ]
1006.2761
1
The evolution of protein-protein interactions over time has led to a complex network whose character is modular in the cellular function and highly correlated in its connectivity. The question of the characterization and emergence of modularity following principles of evolution remains an important challenge as there is no encompassing theory to explain the resulting modular topology. Here, we perform an empirical study of the yeast protein-interaction network . We find a novel large-scale organization of the functional classes of proteins characterized in terms of scale-invariant laws of modularity. We develop a mathematical framework and demonstrate a relationship between the modular structure and the evolution growth rate of the interactions, conserved proteins , and topological length-scales in the system revealing a hierarchy of mutational events giving rise to the modular topology. These results are expected to apply to other complex networks providing a general theoretical framework to describe their organization and dynamics.
Cellular functions are based on the complex interplay of proteins, therefore the structure and dynamics of these protein-protein interaction (PPI) networks are the key to the functional understanding of cells. In the last years, large-scale PPI networks of several organisms were investigated. Methodological improvements now allow the analysis of PPI networks of organisms simultaneously as well as the direct modeling of ancestral networks. This provides the opportunity to challenge existing assumptions on network evolution. We utilized present-day PPI networks from integrated datasets of seven organisms and developed a theoretical and bioinformatic framework for studying the evolutionary dynamics of PPI networks. A novel filtering approach using percolation analysis was developed to remove low confidence interactions based on topological constraints. We then reconstructed the ancient PPI networks of different ancestors, for which the ancestral proteomes, as well as the ancestral interactions, were inferred. Ancestral proteins were reconstructed using orthologous groups on different evolutionary levels. A stochastic approach, using the duplication-divergence model, was developed for estimating the probabilities of ancient interactions from today's PPI networks. The growth rates for nodes, edges, sizes and modularities of the networks indicate multiplicative growth and are consistent with the results from independent static analysis. Our results support the duplication-divergence model of evolution and indicate fractality and multiplicative growth as general properties of the PPI network structure and dynamics.
[ { "type": "R", "before": "The evolution of", "after": "Cellular functions are based on the complex interplay of proteins, therefore the structure and dynamics of these", "start_char_pos": 0, "end_char_pos": 16 }, { "type": "R", "before": "interactions over time has led to a complex network whose character is modular in the cellular function and highly correlated in its connectivity. The question of", "after": "interaction (PPI) networks are the key to", "start_char_pos": 33, "end_char_pos": 195 }, { "type": "R", "before": "characterization and emergence of modularity following principles of evolution remains an important challenge as there is no encompassing theory to explain the resulting modular topology. Here, we perform an empirical study of the yeast protein-interaction network . We find a novel", "after": "functional understanding of cells. In the last years,", "start_char_pos": 200, "end_char_pos": 482 }, { "type": "R", "before": "URLanization of the functional classes of proteins characterized in terms of scale-invariant laws of modularity. We develop a mathematical framework and demonstrate a relationship between the modular structure and the evolution growth rate of the interactions, conserved proteins , and topological length-scales in the system revealing a hierarchy of mutational events giving rise to the modular topology. These results are expected to apply to other complex networks providing a general theoretical framework to describe their URLanization", "after": "PPI networks of several URLanisms were investigated. Methodological improvements now allow the analysis of PPI networks of URLanisms simultaneously as well as the direct modeling of ancestral networks. This provides the opportunity to challenge existing assumptions on network evolution. We utilized present-day PPI networks from integrated datasets of seven URLanisms and developed a theoretical and bioinformatic framework for studying the evolutionary dynamics of PPI networks. A novel filtering approach using percolation analysis was developed to remove low confidence interactions based on topological constraints. We then reconstructed the ancient PPI networks of different ancestors, for which the ancestral proteomes, as well as the ancestral interactions, were inferred. Ancestral proteins were reconstructed using orthologous groups on different evolutionary levels. A stochastic approach, using the duplication-divergence model, was developed for estimating the probabilities of ancient interactions from today's PPI networks. The growth rates for nodes, edges, sizes and modularities of the networks indicate multiplicative growth and are consistent with the results from independent static analysis. Our results support the duplication-divergence model of evolution and indicate fractality and multiplicative growth as general properties of the PPI network structure", "start_char_pos": 495, "end_char_pos": 1035 } ]
[ 0, 179, 387, 466, 607, 900 ]
1006.3224
1
Our goal is to resolve a problem proposed by Karatzas and Fernholz (2008) : Characterizing the minimum amount of initial capital that would guarantee the investor to beat the market portfolio with a certain probability as a function of the market configuration and time to maturity. We show that this value function is the smallest supersolution of a non-linear PDE. As in Karatzas and Fernholz (2008) , we do not assume the existence of an equivalent local martingale measure but merely the existence of a local martingale deflator.
Our goal is to resolve a problem proposed by Fernholz and Karatzas On optimal arbitrage (2008) Columbia Univ. : to characterize the minimum amount of initial capital with which an investor can beat the market portfolio with a certain probability , as a function of the market configuration and time to maturity. We show that this value function is the smallest nonnegative viscosity supersolution of a nonlinear PDE. As in Fernholz and Karatzas On optimal arbitrage (2008) Columbia Univ. , we do not assume the existence of an equivalent local martingale measure , but merely the existence of a local martingale deflator.
[ { "type": "R", "before": "Karatzas and Fernholz", "after": "Fernholz and Karatzas", "start_char_pos": 45, "end_char_pos": 66 }, { "type": "A", "before": null, "after": "On optimal arbitrage", "start_char_pos": 67, "end_char_pos": 67 }, { "type": "R", "before": ": Characterizing", "after": "Columbia Univ.", "start_char_pos": 75, "end_char_pos": 91 }, { "type": "A", "before": null, "after": ": to characterize", "start_char_pos": 92, "end_char_pos": 92 }, { "type": "R", "before": "that would guarantee the investor to", "after": "with which an investor can", "start_char_pos": 131, "end_char_pos": 167 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 221, "end_char_pos": 221 }, { "type": "A", "before": null, "after": "nonnegative viscosity", "start_char_pos": 335, "end_char_pos": 335 }, { "type": "R", "before": "non-linear", "after": "nonlinear", "start_char_pos": 355, "end_char_pos": 365 }, { "type": "R", "before": "Karatzas and Fernholz", "after": "Fernholz and Karatzas", "start_char_pos": 377, "end_char_pos": 398 }, { "type": "A", "before": null, "after": "On optimal arbitrage", "start_char_pos": 399, "end_char_pos": 399 }, { "type": "A", "before": null, "after": "Columbia Univ.", "start_char_pos": 407, "end_char_pos": 407 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 483, "end_char_pos": 483 } ]
[ 0, 285, 370 ]
1006.3627
1
It is a common expectation in chemistry that a chemical transformation which takes place in the presence of a catalyst must also take place in its absence , though perhaps at a much slower rate. We say a reaction network is " saturated " if it satisfies such an expectation. We prove that the associated dynamical systems of saturated networks have no relevant siphons , and are therefore permanent. Saturated networks generalize normal networks by Gnacadja, atomic event-systems by Adleman et al. and constructive networks by Shinar et al. Our result is independent of the specific rates, and the deficiency of the network. The question of permanence for weakly-reversible reaction networks remains an important and long-standing open problem .
A "critical siphon" is a subset of the species in a chemical reaction network whose absence is forward invariant and stoichiometrically compatible with a positive point. We define "catalytic networks" as networks with an essentially catalytic reaction pathway: one which is " on " in the presence of certain catalysts and "off" in their absence. We show that synthetic DNA molecular circuits that have been shown to perform signal amplification and molecular logic happen to be examples of catalytic networks. Our main theorem is that all weakly-reversible networks with critical siphons are catalytic. We obtain new proofs for the persistence of atomic event-systems of Adleman et al. , and normal networks of Gnacadja. We define "autocatalytic networks," and conjecture that weakly-reversible reaction networks have critical siphons if and only if they are autocatalytic .
[ { "type": "R", "before": "It is a common expectation in chemistry that a chemical transformation which takes place in the presence of a catalyst must also take place in its absence , though perhaps at a much slower rate. We say a reaction network", "after": "A \"critical siphon\" is a subset of the species in a chemical reaction network whose absence is forward invariant and stoichiometrically compatible with a positive point. We define \"catalytic networks\" as networks with an essentially catalytic reaction pathway: one which", "start_char_pos": 0, "end_char_pos": 220 }, { "type": "R", "before": "saturated", "after": "on", "start_char_pos": 226, "end_char_pos": 235 }, { "type": "R", "before": "if it satisfies such an expectation. We prove that the associated dynamical systems of saturated networkshave no relevant siphons , and are therefore permanent. Saturated networks generalize normal networks by Gnacadja,", "after": "in the presence of certain catalysts and \"off\" in their absence. We show that synthetic DNA molecular circuits that have been shown to perform signal amplification and molecular logic happen to be examples of catalytic networks. Our main theorem is that all weakly-reversible networks with critical siphons are catalytic. We obtain new proofs for the persistence of", "start_char_pos": 238, "end_char_pos": 457 }, { "type": "R", "before": "by", "after": "of", "start_char_pos": 479, "end_char_pos": 481 }, { "type": "R", "before": "and constructive networks by Shinar et al. Our result is independent of the specific rates,and the deficiency of the network. The question of permanence for", "after": ", and normal networks of Gnacadja. We define \"autocatalytic networks,\" and conjecture that", "start_char_pos": 497, "end_char_pos": 653 }, { "type": "R", "before": "remains an important and long-standing open problem", "after": "have critical siphons if and only if they are autocatalytic", "start_char_pos": 690, "end_char_pos": 741 } ]
[ 0, 194, 274, 398, 539, 622 ]
1006.3627
2
A "critical siphon" is a subset of the species in a chemical reaction network whose absence is forward invariant and stoichiometrically compatible with a positive point. We define "catalytic networks " as networks with an essentially catalytic reaction pathway: one which is "on " in the presence of certain catalysts and "off " in their absence. We show that synthetic DNA molecular circuits that have been shown to perform signal amplification and molecular logic happen to be examples of catalytic networks . Our main theorem is that all weakly-reversible networks with critical siphons are catalytic. We obtain new proofs for the persistence of atomic event-systems of Adleman et al., and normal networks of Gnacadja. We define " autocatalytic networks, " and conjecture that weakly-reversible reaction networks have critical siphons if and only if they are autocatalytic.
We define catalytic networks as chemical reaction networks with an essentially catalytic reaction pathway: one which is on in the presence of certain catalysts and off in their absence. We show that examples of catalytic networks include synthetic DNA molecular circuits that have been shown to perform signal amplification and molecular logic . Recall that a critical siphon is a subset of the species in a chemical reaction network whose absence is forward invariant and stoichiometrically compatible with a positive point . Our main theorem is that all weakly-reversible networks with critical siphons are catalytic. Consequently, we obtain new proofs for the persistence of atomic event-systems of Adleman et al., and normal networks of Gnacadja. We define autocatalytic networks, and conjecture that a weakly-reversible reaction network has critical siphons if and only if it is autocatalytic.
[ { "type": "R", "before": "A \"critical siphon\" is a subset of the species in a chemical reaction network whose absence is forward invariant and stoichiometrically compatible with a positive point. We define \"catalytic networks \" as", "after": "We define catalytic", "start_char_pos": 0, "end_char_pos": 204 }, { "type": "A", "before": null, "after": "as chemical reaction networks", "start_char_pos": 214, "end_char_pos": 214 }, { "type": "R", "before": "\"on \"", "after": "on", "start_char_pos": 276, "end_char_pos": 281 }, { "type": "R", "before": "\"off \"", "after": "off", "start_char_pos": 323, "end_char_pos": 329 }, { "type": "A", "before": null, "after": "examples of catalytic networks include", "start_char_pos": 361, "end_char_pos": 361 }, { "type": "R", "before": "happen to be examples of catalytic networks", "after": ". Recall that a critical siphon is a subset of the species in a chemical reaction network whose absence is forward invariant and stoichiometrically compatible with a positive point", "start_char_pos": 468, "end_char_pos": 511 }, { "type": "R", "before": "We", "after": "Consequently, we", "start_char_pos": 607, "end_char_pos": 609 }, { "type": "D", "before": "\"", "after": null, "start_char_pos": 734, "end_char_pos": 735 }, { "type": "D", "before": "\"", "after": null, "start_char_pos": 760, "end_char_pos": 761 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 782, "end_char_pos": 782 }, { "type": "R", "before": "networks have", "after": "network has", "start_char_pos": 810, "end_char_pos": 823 }, { "type": "R", "before": "they are", "after": "it is", "start_char_pos": 856, "end_char_pos": 864 } ]
[ 0, 169, 347, 513, 606, 723 ]
1006.4111
1
We have combined and integrated our previously developed library-based Growth and Monte Carlo simulation techniques in order to allow for both thorough sampling and free energy measurements of all-atom peptides. The integrated growth and relaxation technique makes use of pre-calculated Boltzmann distributed statistical libraries of molecular-fragments and their corresponding energies . Due to this new implementation, we are now able to accurately determine free energies and sample phase space for larger systems that were previously unattainable with library-based growth. We report sampling quality and statistics on free energy measurements for four polypeptide systems that we examined. We discuss possible applications of this new and more general library-based method .
Pre-calculated libraries of molecular fragment configurations have previously been used as a basis for both equilibrium sampling (via " library-based Monte Carlo") and for obtaining absolute free energies using a polymer-growth formalism. Here, we combine the two approaches to extend the size of systems for which free energies can be calculated. We study a series of all-atom poly-alanine systems in a simple dielectric "solvent" and find that precise free energies can be obtained rapidly. For instance, for 12 residues, less than an hour of single-processor time is required. The combined approach is formally equivalent to the "annealed importance sampling" algorithm; instead of annealing by decreasing temperature, however, interactions among fragments are gradually added as the molecule is "grown." We discuss implications for future binding affinity calculations in which a ligand is grown into a binding site .
[ { "type": "R", "before": "We have combined and integrated our previously developed", "after": "Pre-calculated libraries of molecular fragment configurations have previously been used as a basis for both equilibrium sampling (via \"", "start_char_pos": 0, "end_char_pos": 56 }, { "type": "R", "before": "Growth and Monte Carlo simulation techniques in order to allow for both thorough sampling and free energy measurements of", "after": "Monte Carlo\") and for obtaining absolute free energies using a polymer-growth formalism. Here, we combine the two approaches to extend the size of systems for which free energies can be calculated. We study a series of", "start_char_pos": 71, "end_char_pos": 192 }, { "type": "R", "before": "peptides. The integrated growth and relaxation technique makes use of pre-calculated Boltzmann distributed statistical libraries of molecular-fragments and their corresponding energies . Due to this new implementation, we are now able to accurately determine free energies and sample phase space for larger systems that were previously unattainable with library-based growth. We report sampling quality and statistics on free energy measurements for four polypeptide systems that we examined. We discuss possible applications of this new and more general library-based method", "after": "poly-alanine systems in a simple dielectric \"solvent\" and find that precise free energies can be obtained rapidly. For instance, for 12 residues, less than an hour of single-processor is required. The combined approach is formally equivalent to the \"annealed importance sampling\" algorithm; instead of annealing by decreasing temperature, however, interactions among fragments are gradually added as the molecule is \"grown.\" We discuss implications for future binding affinity calculations in which a ligand is grown into a binding site", "start_char_pos": 202, "end_char_pos": 777 } ]
[ 0, 211, 388, 577, 694 ]
1007.0026
1
A novel dynamical model for the study of operational risk in banks is proposed. The equation of motion takes into account the interactions among different bank's processes, the spontaneous generation of losses via a noise term and the efforts made by the banks to avoid their occurrence. A scheme for the estimation of some parameters of the model is illustrated, so that it can be tailored on the organizational structure of a specific bank . We focus on the case in which there are no causal loops in the matrix of couplings and exploit the exact solution to estimate also the parameters of the noise. The scheme for the estimation of the parameters is proved to be consistent and the model is shown to exhibit a remarkable capability in forecasting future cumulative losses .
A novel dynamical model for the study of operational risk in banks and suitable for the calculation of the Value at Risk (VaR) is proposed. The equation of motion takes into account the interactions among different bank's processes, the spontaneous generation of losses via a noise term and the efforts made by the bank to avoid their occurrence. Since the model is very general, it can be tailored on the organizational structure of a specific bank by estimating some of its parameters from historical operational losses. The model is exactly solved in the case in which there are no causal loops in the matrix of couplings and it is shown how the solution can be exploited to estimate also the parameters of the noise. The forecasting power of the model is investigated by using a fraction f of simulated data to estimate the parameters, showing that for f = 0.75 the VaR can be forecast with an error \simeq 10^{-3} .
[ { "type": "A", "before": null, "after": "and suitable for the calculation of the Value at Risk (VaR)", "start_char_pos": 67, "end_char_pos": 67 }, { "type": "R", "before": "banks", "after": "bank", "start_char_pos": 256, "end_char_pos": 261 }, { "type": "R", "before": "A scheme for the estimation of some parameters of the model is illustrated, so that", "after": "Since the model is very general,", "start_char_pos": 289, "end_char_pos": 372 }, { "type": "R", "before": ". We focus on", "after": "by estimating some of its parameters from historical operational losses. The model is exactly solved in", "start_char_pos": 443, "end_char_pos": 456 }, { "type": "R", "before": "exploit the exact solution", "after": "it is shown how the solution can be exploited", "start_char_pos": 532, "end_char_pos": 558 }, { "type": "R", "before": "scheme for the estimation of the parameters is proved to be consistent and the model is shown to exhibit a remarkable capability in forecasting future cumulative losses", "after": "forecasting power of the model is investigated by using a fraction f of simulated data to estimate the parameters, showing that for f = 0.75 the VaR can be forecast with an error \\simeq 10^{-3", "start_char_pos": 609, "end_char_pos": 777 } ]
[ 0, 80, 288, 444, 604 ]
1007.2513
1
Site-specific recombination is an important cellular process that yields a variety of knotted and catenated DNA products on supercoiled circular DNA . Twist knots are some of the most common conformations of these products . They are also one of the simplest families of knots and catenanes. Yet, our systematic understanding of their implication in DNA and important cellular processes like site-specific recombination is very limited. Here we present a topological model of site-specific recombination characterising all possible products of site-specific recombination on twist knot substrates, extending previous work of Buck and Flapan. We illustrate how to use our model to examine previously uncharacterized experimental data. We show how our model can help determine the sequence of products in multiple rounds of processive recombination and distinguish between products of processive and distributive recombination. Companion paper (arXiv:1007.2115v1 math.GT) provides topological proofs for the model presented here .
Site-specific recombination on supercoiled circular DNA molecules can yield a variety of knots and catenanes . Twist knots are some of the most common conformations of these products and they can act as substrates for further rounds of site-specific recombination . They are also one of the simplest families of knots and catenanes. Yet, our systematic understanding of their implication in DNA and important cellular processes like site-specific recombination is very limited. Here we present a topological model of site-specific recombination characterising all possible products of this reaction on twist knot substrates, extending previous work of Buck and Flapan. We illustrate how to use our model to examine previously uncharacterised experimental data. We also show how our model can help determine the sequence of products in multiple rounds of processive recombination and distinguish between products of processive and distributive recombination. This model studies generic site- specific recombination on arbitrary twist knot substrates, a subject for which there is limited global understanding. We also provide a systematic method of applying our model to a variety of different recombination systems .
[ { "type": "D", "before": "is an important cellular process that yields a variety of knotted and catenated DNA products", "after": null, "start_char_pos": 28, "end_char_pos": 120 }, { "type": "A", "before": null, "after": "molecules can yield a variety of knots and catenanes", "start_char_pos": 149, "end_char_pos": 149 }, { "type": "A", "before": null, "after": "and they can act as substrates for further rounds of site-specific recombination", "start_char_pos": 224, "end_char_pos": 224 }, { "type": "R", "before": "site-specific recombination", "after": "this reaction", "start_char_pos": 546, "end_char_pos": 573 }, { "type": "R", "before": "uncharacterized", "after": "uncharacterised", "start_char_pos": 701, "end_char_pos": 716 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 739, "end_char_pos": 739 }, { "type": "R", "before": "Companion paper (arXiv:1007.2115v1 math.GT) provides topological proofs for the model presented here", "after": "This model studies generic site- specific recombination on arbitrary twist knot substrates, a subject for which there is limited global understanding. We also provide a systematic method of applying our model to a variety of different recombination systems", "start_char_pos": 929, "end_char_pos": 1029 } ]
[ 0, 226, 293, 438, 643, 735, 928, 969 ]
1007.2668
1
How do living cells achieve sufficient abundances of functional protein complexes while minimizing promiscuous non-functional interactions between their proteins ? Here we study this problem using a first-principle model of the cell whose phenotypic traits are directly determined from its genome through biophysical properties of protein structures and binding interactions in crowded cellular environment. The model cell includes three independent pathways, whose topologies of PPI subnetworks are different, but whose functional concentrations equally contribute to cell's fitness . The model cells evolve through genotypic mutations and phenotypic protein copy number variations. We found a strong relationship between evolved physical-chemical properties of protein interactions and their abundances due to a "frustration" effect: strengthening of functional interactions brings about hydrophobic surfaces , which make proteins prone to promiscuous binding. The balancing act is achieved by lowering concentrations of hub proteins while raising solubilities and abundances of functional monomers. The non-monotonic relation between abundances and Protein-Protein Interaction network degrees of yeast proteins validates our predictions. Furthermore, in agreement with our model we found that highly abundant yeast proteins show a positive correlation between their degree and dosage sensitivity with respect to overexpression .
How do living cells achieve sufficient abundances of functional protein complexes while minimizing promiscuous non-functional interactions ? Here we study this problem using a first-principle model of the cell whose phenotypic traits are directly determined from its genome through biophysical properties of protein structures and binding interactions in crowded cellular environment. The model cell includes three independent prototypical pathways, whose topologies of Protein-Protein Interaction (PPI) sub-networks are different, but whose contributions to the cell fitness are equal. Model cells evolve through genotypic mutations and phenotypic protein copy number variations. We found a strong relationship between evolved physical-chemical properties of protein interactions and their abundances due to a "frustration" effect: strengthening of functional interactions brings about hydrophobic interfaces , which make proteins prone to promiscuous binding. The balancing act is achieved by lowering concentrations of hub proteins while raising solubilities and abundances of functional monomers. Based on these principles we generated and analyzed a possible realization of the proteome-wide PPI network in yeast. In this simulation we found that high-throughput affinity capture - mass spectroscopy experiments can detect functional interactions with high fidelity only for high abundance proteins while missing most interactions for low abundance proteins .
[ { "type": "D", "before": "between their proteins", "after": null, "start_char_pos": 139, "end_char_pos": 161 }, { "type": "A", "before": null, "after": "prototypical", "start_char_pos": 450, "end_char_pos": 450 }, { "type": "R", "before": "PPI subnetworks", "after": "Protein-Protein Interaction (PPI) sub-networks", "start_char_pos": 481, "end_char_pos": 496 }, { "type": "R", "before": "functional concentrations equally contribute to cell's fitness . The model", "after": "contributions to the cell fitness are equal. Model", "start_char_pos": 522, "end_char_pos": 596 }, { "type": "R", "before": "surfaces", "after": "interfaces", "start_char_pos": 903, "end_char_pos": 911 }, { "type": "R", "before": "The non-monotonic relation between abundances and Protein-Protein Interaction network degrees of yeast proteins validates our predictions. Furthermore, in agreement with our model", "after": "Based on these principles we generated and analyzed a possible realization of the proteome-wide PPI network in yeast. In this simulation", "start_char_pos": 1103, "end_char_pos": 1282 }, { "type": "R", "before": "highly abundant yeast proteins show a positive correlation between their degree and dosage sensitivity with respect to overexpression", "after": "high-throughput affinity capture - mass spectroscopy experiments can detect functional interactions with high fidelity only for high abundance proteins while missing most interactions for low abundance proteins", "start_char_pos": 1297, "end_char_pos": 1430 } ]
[ 0, 163, 407, 586, 684, 963, 1102, 1241 ]
1007.2968
1
This cautious note aims to point at the potential risks for the financial system caused by various increasingly popular volatility derivatives including variance swaps on futures of equity indices. It investigates the pricing of variance swaps under the 3/2 volatility model. Carr with Itkin and Sun have discussed the pricing of variance swaps under this type of model. This paper studies a special case of this model and observes an explosion of prices for squared volatility and variance swaps . It argues that such a price explosion may have deeper economic reasons, which should be taken into account when designing volatility derivatives . The growth optimal portfolio is the num\'{e}raire portfolio and used as num\'{e}raire together with the real world probability measure as pricing measure. This pricing concept provides minimal prices for variance swaps even when an equivalent risk neutral probability measure does not exist .
This paper investigates the pricing and hedging of variance swaps under a 3/2 volatility model. Explicit pricing and hedging formulas of variance swaps are obtained under the benchmark approach, which only requires the existence of the num\'{e}raire portfolio. The growth optimal portfolio is the num\'{e}raire portfolio and used as num\'{e}raire together with the real world probability measure as pricing measure. This pricing concept provides minimal prices for variance swaps even when an equivalent risk neutral probability measure does not exist .
[ { "type": "R", "before": "cautious note aims to point at the potential risks for the financial system caused by various increasingly popular volatility derivatives including variance swaps on futures of equity indices. It", "after": "paper", "start_char_pos": 5, "end_char_pos": 200 }, { "type": "A", "before": null, "after": "and hedging", "start_char_pos": 226, "end_char_pos": 226 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 251, "end_char_pos": 254 }, { "type": "R", "before": "Carr with Itkin and Sun have discussed the pricing", "after": "Explicit pricing and hedging formulas", "start_char_pos": 277, "end_char_pos": 327 }, { "type": "R", "before": "under this type of model. This paper studies a special case of this model and observes an explosion of prices for squared volatility and variance swaps . It argues that such a price explosion may have deeper economic reasons, which should be taken into account when designing volatility derivatives", "after": "are obtained under the benchmark approach, which only requires the existence of the num\\'{e", "start_char_pos": 346, "end_char_pos": 644 } ]
[ 0, 197, 276, 371, 499, 660, 815 ]
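
Editorial aside on the record above: the fair strike of a variance swap is the expected average realized variance, which is easy to sanity-check by simulation. The sketch below is a plain Euler Monte Carlo under assumed 3/2 dynamics dv = kappa*v*(theta - v) dt + eps*v^{3/2} dW; all parameter values and the positivity fix are illustrative placeholders, not figures or methods from the paper.

import numpy as np

# Illustrative 3/2 stochastic volatility dynamics (parameters are assumptions):
#   dv_t = kappa * v_t * (theta - v_t) dt + eps * v_t^{3/2} dW_t
# The fair variance-swap strike is E[(1/T) * integral_0^T v_t dt].
rng = np.random.default_rng(0)
kappa, theta, eps = 2.0, 0.04, 0.3
v0, T, n_steps, n_paths = 0.04, 1.0, 500, 20000
dt = T / n_steps

v = np.full(n_paths, v0)
integral = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    integral += v * dt                      # left-point rule for the time integral
    v = np.abs(v + kappa * v * (theta - v) * dt + eps * v**1.5 * dW)  # crude reflection keeps v >= 0

fair_strike = integral.mean() / T
print(f"Monte Carlo fair variance-swap strike: {fair_strike:.6f}")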
1007.4106
1
Vehicular Ad hoc NETworks (VANETs) have emerged as a platform to support intelligent inter-vehicle communication and improve traffic safety and performance. The road-constrained and high mobility of vehicles, their unbounded power source, and the emergence of roadside wireless infrastructures make VANETs a challenging research topic. A key to the development of protocols for intervehicle communication and services lies in the knowledge of the topological characteristics of the VANET communication graph. This paper explores the dynamics of VANETs in urban environments . Using both real and realistic mobility traces, we study the networking shape of VANETs in urban environments under different transmission and market penetration ranges. Given that a number of RSUs have to be deployed for disseminating information to vehicles in an urban area, we also study their impact on vehicular connectivity. Several latent facts about the VANET graph are revealed and implications for their exploitation in protocol design are examined .
Vehicular Ad hoc NETworks (VANETs) have emerged as a platform to support intelligent inter-vehicle communication and improve traffic safety and performance. The road-constrained , high mobility of vehicles, their unbounded power source, and the emergence of roadside wireless infrastructures make VANETs a challenging research topic. A key to the development of protocols for inter-vehicle communication and services lies in the knowledge of the topological characteristics of the VANET communication graph. This paper explores the dynamics of VANETs in urban environments and investigates the impact of these findings in the design of VANET routing protocols . Using both real and realistic mobility traces, we study the networking shape of VANETs under different transmission and market penetration ranges. Given that a number of RSUs have to be deployed for disseminating information to vehicles in an urban area, we also study their impact on vehicular connectivity. Through extensive simulations we investigate the performance of VANET routing protocols by exploiting the knowledge of VANET graphs analysis .
[ { "type": "R", "before": "and", "after": ",", "start_char_pos": 178, "end_char_pos": 181 }, { "type": "R", "before": "intervehicle", "after": "inter-vehicle", "start_char_pos": 378, "end_char_pos": 390 }, { "type": "A", "before": null, "after": "and investigates the impact of these findings in the design of VANET routing protocols", "start_char_pos": 574, "end_char_pos": 574 }, { "type": "D", "before": "in urban environments", "after": null, "start_char_pos": 664, "end_char_pos": 685 }, { "type": "R", "before": "Several latent facts about the VANET graph are revealed and implications for their exploitation in protocol design are examined", "after": "Through extensive simulations we investigate the performance of VANET routing protocols by exploiting the knowledge of VANET graphs analysis", "start_char_pos": 908, "end_char_pos": 1035 } ]
[ 0, 156, 335, 508, 745, 907 ]
1007.5080
1
We present an analytical model that enables a comparison of multiple design options of Opportunistic Spectrum Orthogonal Frequency Division Multiple Access (OS-OFDMA) . The model considers continuous and non-continuous subchannel allocation algorithms, as well as different ways to bond separate non-continuous frequency bands. Different user priorities and channel dwell times, for the Secondary Users and the Primary Users of the radio spectrum, are studied. Further, the model allows the inclusion of different types of Secondary User traffic. Finally, the model enables the study of multiple two-stage spectrum sensing algorithms. Analysis is based on a discrete time Markov chain model which allows for the computation of network characteristics such as the average throughput. From the analysis we conclude that OS-OFDMA with subchannel notching and channel bonding could provide , under certain network configurations, almost seven times higher throughput than the design without those options enabled .
We present an analytical model that enables throughput evaluation of Opportunistic Spectrum Orthogonal Frequency Division Multiple Access (OS-OFDMA) networks. The core feature of the model, based on a discrete time Markov chain, is the consideration of different channel and subchannel allocation strategies under different Primary and Secondary user types, traffic and priority levels. The analytical model also assesses the impact of different spectrum sensing strategies on the throughput of OS-OFDMA network. The analysis applies to the IEEE 802.22 standard, to evaluate the impact of two-stage spectrum sensing strategy and varying temporal activity of wireless microphones on the IEEE 802.22 throughput. Our study suggests that OS-OFDMA with subchannel notching and channel bonding could provide almost ten times higher throughput compared with the design without those options , when the activity and density of wireless microphones is very high. Furthermore, we confirm that OS-OFDMA implementation without subchannel notching, used in the IEEE 802.22, is able to support real-time and non-real-time quality of service classes, provided that wireless microphones temporal activity is moderate (with approximately one wireless microphone per 3,000 inhabitants with light urban population density and short duty cycles). Finally, two-stage spectrum sensing option improves OS-OFDMA throughput, provided that the length of spectrum sensing at every stage is optimized using our model .
[ { "type": "R", "before": "a comparison of multiple design options of", "after": "throughput evaluation of", "start_char_pos": 44, "end_char_pos": 86 }, { "type": "R", "before": ". The model considers continuous and non-continuous subchannel allocation algorithms, as well as different ways to bond separate non-continuous frequency bands. Different user priorities and channel dwell times, for the Secondary Users and the Primary Users of the radio spectrum, are studied. Further, the model allows the inclusion of different types of Secondary User traffic. Finally, the model enables the study of multiple", "after": "networks. The core feature of the model, based on a discrete time Markov chain, is the consideration of different channel and subchannel allocation strategies under different Primary and Secondary user types, traffic and priority levels. The analytical model also assesses the impact of different spectrum sensing strategies on the throughput of OS-OFDMA network. The analysis applies to the IEEE 802.22 standard, to evaluate the impact of", "start_char_pos": 167, "end_char_pos": 595 }, { "type": "R", "before": "algorithms. Analysis is based on a discrete time Markov chain model which allows for the computation of network characteristics such as the average throughput. From the analysis we conclude", "after": "strategy and varying temporal activity of wireless microphones on the IEEE 802.22 throughput. Our study suggests", "start_char_pos": 623, "end_char_pos": 812 }, { "type": "R", "before": ", under certain network configurations, almost seven", "after": "almost ten", "start_char_pos": 886, "end_char_pos": 938 }, { "type": "R", "before": "than", "after": "compared with", "start_char_pos": 963, "end_char_pos": 967 }, { "type": "R", "before": "enabled", "after": ", when the activity and density of wireless microphones is very high. Furthermore, we confirm that OS-OFDMA implementation without subchannel notching, used in the IEEE 802.22, is able to support real-time and non-real-time quality of service classes, provided that wireless microphones temporal activity is moderate (with approximately one wireless microphone per 3,000 inhabitants with light urban population density and short duty cycles). Finally, two-stage spectrum sensing option improves OS-OFDMA throughput, provided that the length of spectrum sensing at every stage is optimized using our model", "start_char_pos": 1001, "end_char_pos": 1008 } ]
[ 0, 327, 460, 546, 634, 782 ]
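
The record above builds its throughput results on the stationary distribution of a discrete time Markov chain. A generic sketch of that computation follows; the 3-state transition matrix and per-state rates are hypothetical placeholders, not the OS-OFDMA chain itself.

import numpy as np

# Stationary distribution pi of a row-stochastic matrix P: pi P = pi, sum(pi) = 1.
# P below is a placeholder 3-state chain, not the chain from the paper.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Solve the left-eigenvector problem as a linear system with a normalisation row.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.concatenate([np.zeros(3), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary distribution:", pi)

# Average throughput is then a pi-weighted sum of per-state service rates, e.g.:
rates = np.array([0.0, 1.0, 2.0])  # hypothetical subchannels served in each state
print("average throughput:", pi @ rates)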
1007.5376
1
This paper considers an optimal control of a large company with debt liability under bankrupt probability constraints. The company, which faces constant liability payments and has choices to choose various production/business policies from an available set of control policies with different expected profits and risks, controls the business policy and dividend payout process to maximize the expected present value of the dividends until the time of bankruptcy. However, if the dividend payout barrier is too low to be acceptable, it may result in the company's bankruptcy soon. In order to protect the shareholders' profits, the managements of the company impose a reasonable and normal constraint on their dividend strategy, that is, the bankrupt probability associated with the optimal dividend payout barrier should be smaller than a given risk level within a fixed time horizon. This paper aims at working out the optimal retention ratio, dividend payout level, explicit value function of the insurance company under bankrupt probability constraint by stochastic analysis, PDE methods and variational inequality approach , getting a risk-based capital standard to ensure the capital requirement of can cover the total given risk by numerical analysis , and giving reasonable economic interpretation for the results.
This paper considers an optimal control of a big financial company with debt liability under bankrupt probability constraints. The company, which faces constant liability payments and has choices to choose various production/business policies from an available set of control policies with different expected profits and risks, controls the business policy and dividend payout process to maximize the expected present value of the dividends until the time of bankruptcy. However, if the dividend payout barrier is too low to be acceptable, it may result in the company's bankruptcy soon. In order to protect the shareholders' profits, the managements of the company impose a reasonable and normal constraint on their dividend strategy, that is, the bankrupt probability associated with the optimal dividend payout barrier should be smaller than a given risk level within a fixed time horizon. This paper aims working out the optimal control policy as well as optimal return function for the company under bankrupt probability constraint by stochastic analysis, PDE methods and variational inequality approach . Moreover, we establish a risk-based capital standard to ensure the capital requirement of can cover the total given risk by numerical analysis and give reasonable economic interpretation for the results.
[ { "type": "R", "before": "large", "after": "big financial", "start_char_pos": 45, "end_char_pos": 50 }, { "type": "D", "before": "at", "after": null, "start_char_pos": 901, "end_char_pos": 903 }, { "type": "R", "before": "retention ratio, dividend payout level, explicit value function of the insurance", "after": "control policy as well as optimal return function for the", "start_char_pos": 928, "end_char_pos": 1008 }, { "type": "R", "before": ", getting", "after": ". Moreover, we establish", "start_char_pos": 1127, "end_char_pos": 1136 }, { "type": "R", "before": ", and giving", "after": "and give", "start_char_pos": 1257, "end_char_pos": 1269 } ]
[ 0, 118, 462, 579, 884 ]
1008.0237
1
The protein folding is regarded as a quantum transition between torsion states on polypeptide chain. The deduction of the folding rate formula in our previous studies is reviewed. The rate formula is generalized to the case of frequency variation in folding. Then the following problems about the application of the rate theory are discussed: 1) The unified theory on the two-state and multi-state protein folding is given based on the concept of quantum transition. 2) The relationship of folding and unfolding rates vs denaturant concentration is studied. 3) The temperature dependence of folding rate is deduced and the non-Arrhenius behaviors of temperature dependence are interpreted in a natural way. 4) The inertial moment dependence of folding rate is calculated based on the model of dynamical contact order and the consistent results are obtained by comparison with 80-protein experimental dataset. 5) The exergonic and endergonic folding are distinguished through the comparison between theoretical and experimental rates for each protein and the ultrafast folding problem is viewed from the point of quantum folding theory . And finally, 6) since only the torsion-accessible states are manageable in the present formulation of quantum transition how the set of torsion-accessible states can be expanded by using statistical energy landscape approach is discussed .
The protein folding is regarded as a quantum transition between torsion states on polypeptide chain. The deduction of the folding rate formula in our previous studies is reviewed. The rate formula is generalized to the case of frequency variation in folding. Then the following problems about the application of the rate theory are discussed: 1) The unified theory on the two-state and multi-state protein folding is given based on the concept of quantum transition. 2) The relationship of folding and unfolding rates vs denaturant concentration is studied. 3) The temperature dependence of folding rate is deduced and the non-Arrhenius behaviors of temperature dependence are interpreted in a natural way. 4) The inertial moment dependence of folding rate is calculated based on the model of dynamical contact order and consistent results are obtained by comparison with one-hundred-protein experimental dataset. 5) The exergonic and endergonic foldings are distinguished through the comparison between theoretical and experimental rates for each protein . The ultrafast folding problem is viewed from the point of quantum folding theory and a new folding speed limit is deduced from quantum uncertainty relation . And finally, 6) since only the torsion-accessible states are manageable in the present formulation of quantum transition how the set of torsion-accessible states can be expanded by using statistical energy landscape approach is discussed . All above discussions support the view that the protein folding is essentially a quantum transition between conformational states .
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 821, "end_char_pos": 824 }, { "type": "R", "before": "80-protein", "after": "one-hundred-protein", "start_char_pos": 876, "end_char_pos": 886 }, { "type": "R", "before": "folding", "after": "foldings", "start_char_pos": 941, "end_char_pos": 948 }, { "type": "R", "before": "and the", "after": ". The", "start_char_pos": 1050, "end_char_pos": 1057 }, { "type": "A", "before": null, "after": "and a new folding speed limit is deduced from quantum uncertainty relation", "start_char_pos": 1135, "end_char_pos": 1135 }, { "type": "A", "before": null, "after": ". All above discussions support the view that the protein folding is essentially a quantum transition between conformational states", "start_char_pos": 1376, "end_char_pos": 1376 } ]
[ 0, 100, 179, 258, 466, 557, 706, 908, 1137 ]
1008.0298
1
Ribosome is a molecular machine that polymerizes a protein where the sequence of the amino acid subunits of the protein is dictated by the sequence of codons (triplets of nucleotide subunits) on a messenger RNA (mRNA) that serves as the template. The ribosome is a molecular motor that utilizes the template mRNA strand also as the track. Thus, in each step the ribosome moves forward by one codon and, simultaneously, elongates the protein by one amino acid. We present a theoretical model that captures most of the main steps in the mechano-chemical cycle of a ribosome. The stochastic movement of the ribosome consists of an alternating sequence of pause and translocation; the sum of the durations of a pause and the following translocation is defined as the time of dwell of the ribosome at the corresponding codon. We present an analytical calculation of the distribution of the dwell times of a ribosome in our model. Our theoretical prediction is consistent with the experimental resultsreported in the literature .
Ribosome is a molecular machine that polymerizes a protein where the sequence of the amino acid subunits of the protein is dictated by the sequence of codons (triplets of nucleotide subunits) on a messenger RNA (mRNA) that serves as the template. The ribosome is a molecular motor that utilizes the template mRNA strand also as the track. Thus, in each step the ribosome moves forward by one codon and, simultaneously, elongates the protein by one amino acid. We present a theoretical model that captures most of the main steps in the mechano-chemical cycle of a ribosome. The stochastic movement of the ribosome consists of an alternating sequence of pause and translocation; the sum of the durations of a pause and the following translocation is the time of dwell of the ribosome at the corresponding codon. We derive the analytical expression for the distribution of the dwell times of a ribosome in our model. Whereever experimental data are available, our theoretical predictions are consistent with those results. We suggest appropriate experiments to test the new predictions of our model, particularly, the effects of the quality control mechanism of the ribosome and that of their crowding on the mRNA track .
[ { "type": "D", "before": "defined as", "after": null, "start_char_pos": 748, "end_char_pos": 758 }, { "type": "R", "before": "present an analytical calculation of", "after": "derive the analytical expression for", "start_char_pos": 824, "end_char_pos": 860 }, { "type": "R", "before": "Our theoretical prediction is consistent with the experimental resultsreported in the literature", "after": "Whereever experimental data are available, our theoretical predictions are consistent with those results. We suggest appropriate experiments to test the new predictions of our model, particularly, the effects of the quality control mechanism of the ribosome and that of their crowding on the mRNA track", "start_char_pos": 925, "end_char_pos": 1021 } ]
[ 0, 246, 338, 459, 572, 676, 820, 924 ]
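
To make the dwell-time decomposition in the record above concrete: if the pause consists of sequential exponential sub-steps followed by an exponential translocation, the dwell time is hypoexponentially distributed and is easy to sample. The kinetic scheme and rates below are hypothetical, not the authors' fitted model.

import numpy as np

# Dwell time = pause + translocation. Here the pause is modelled as sequential
# exponential sub-steps (rates in pause_rates) and translocation as one more
# exponential step. All rates are placeholders, not values from the paper.
rng = np.random.default_rng(1)
pause_rates = [25.0, 30.0, 40.0]   # 1/s, chemical sub-steps of the cycle
transloc_rate = 35.0               # 1/s

n = 100000
dwell = sum(rng.exponential(1.0 / w, n) for w in pause_rates)
dwell += rng.exponential(1.0 / transloc_rate, n)

mean_theory = sum(1.0 / w for w in pause_rates) + 1.0 / transloc_rate
print(f"mean dwell time: {dwell.mean():.4f} s (theory: {mean_theory:.4f} s)")
# A histogram of `dwell` approximates the dwell-time distribution (hypoexponential density).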
1008.0298
2
Ribosome is a molecular machine that polymerizes a protein where the sequence of the amino acid subunits of the protein is dictated by the sequence of codons (triplets of nucleotide subunits ) on a messenger RNA (mRNA) that serves as the template. The ribosome is a molecular motor that utilizes the template mRNA strand also as the track. Thus, in each step the ribosome moves forward by one codon and, simultaneously, elongates the protein by one amino acid. We present a theoretical model that captures most of the main steps in the mechano-chemical cycle of a ribosome. The stochastic movement of the ribosome consists of an alternating sequence of pause and translocation; the sum of the durations of a pause and the following translocation is the time of dwell of the ribosome at the corresponding codon. We derive the analytical expression for the distribution of the dwell times of a ribosome in our model. Whereever experimental data are available, our theoretical predictions are consistent with those results. We suggest appropriate experiments to test the new predictions of our model, particularly, the effects of the quality control mechanism of the ribosome and that of their crowding on the mRNA track.
Ribosome is a molecular machine that polymerizes a protein where the sequence of the amino acid residues, the monomers of the protein , is dictated by the sequence of codons (triplets of nucleotides ) on a messenger RNA (mRNA) that serves as the template. The ribosome is a molecular motor that utilizes the template mRNA strand also as the track. Thus, in each step the ribosome moves forward by one codon and, simultaneously, elongates the protein by one amino acid. We present a theoretical model that captures most of the main steps in the mechano-chemical cycle of a ribosome. The stochastic movement of the ribosome consists of an alternating sequence of pause and translocation; the sum of the durations of a pause and the following translocation is the time of dwell of the ribosome at the corresponding codon. We derive the analytical expression for the distribution of the dwell times of a ribosome in our model. Whereever experimental data are available, our theoretical predictions are consistent with those results. We suggest appropriate experiments to test the new predictions of our model, particularly, the effects of the quality control mechanism of the ribosome and that of their crowding on the mRNA track.
[ { "type": "R", "before": "subunits", "after": "residues, the monomers", "start_char_pos": 96, "end_char_pos": 104 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 120, "end_char_pos": 120 }, { "type": "R", "before": "nucleotide subunits", "after": "nucleotides", "start_char_pos": 172, "end_char_pos": 191 } ]
[ 0, 248, 340, 461, 574, 678, 811, 915, 1021 ]
1008.0431
1
Posttranslational modification of proteins is key in transmission of signals in cells. Many signaling pathways contain several layers of modification cycles that mediate and change the signal through the pathway. Here, we study a simple signaling cascade consisting of n layers of modification cycles, such that the modified protein of one layer acts as modifier in the next layer. Assuming mass-action kinetics and taking the formation of intermediate complexes into account, we show that the steady states are solutions to a polynomial in one variable, and in fact that there is exactly one steady state for any given total amounts of substrates and enzymes. We demonstrate that many steady state concentrations are related through explicit rational functions, which can be found recursively and computed efficiently using programs like Mathematica . For example, stimulus-response curves arise as inverse functions to explicit rational functions. We show that increasing the cascade length shifts the stimulus-response curves to the left . Further, our approach allows us to study enzyme competition, sequestration and how the steady state changes in response to changes in the total amount of substrates. Our approach is essentially algebraic and follows recent trends in the study of posttranslational modification systems.
Posttranslational modification of proteins is key in transmission of signals in cells. Many signaling pathways contain several layers of modification cycles that mediate and change the signal through the pathway. Here, we study a simple signaling cascade consisting of n layers of modification cycles, such that the modified protein of one layer acts as modifier in the next layer. Assuming mass-action kinetics and taking the formation of intermediate complexes into account, we show that the steady states are solutions to a polynomial in one variable, and in fact that there is exactly one steady state for any given total amounts of substrates and enzymes. We demonstrate that many steady state concentrations are related through rational functions, which can be found recursively . For example, stimulus-response curves arise as inverse functions to explicit rational functions. We show that the stimulus-response curves of the modified substrates are shifted to the left as we move down the cascade . Further, our approach allows us to study enzyme competition, sequestration and how the steady state changes in response to changes in the total amount of substrates. Our approach is essentially algebraic and follows recent trends in the study of posttranslational modification systems.
[ { "type": "D", "before": "explicit", "after": null, "start_char_pos": 734, "end_char_pos": 742 }, { "type": "D", "before": "and computed efficiently using programs like Mathematica", "after": null, "start_char_pos": 794, "end_char_pos": 850 }, { "type": "R", "before": "increasing the cascade length shifts the", "after": "the", "start_char_pos": 963, "end_char_pos": 1003 }, { "type": "A", "before": null, "after": "of the modified substrates are shifted", "start_char_pos": 1029, "end_char_pos": 1029 }, { "type": "A", "before": null, "after": "as we move down the cascade", "start_char_pos": 1042, "end_char_pos": 1042 } ]
[ 0, 86, 212, 381, 660, 852, 949, 1044, 1210 ]
1008.1628
1
A Markovian model is proposed in this paper to study the performance of 1-persistent CSMA / CA protocols, from which we obtain stable regions with respect to the throughput and bounded delay of Geometric Retransmission and Exponential Backoff scheduling algorithms . Our results show that the throughput of Geometric Retransmission is unstable for large n while the throughput of Exponential Backoff still exists for n -> infinity . Moreover, the bounded delay region of Geometric Retransmission is the same as its stable throughput region; but that of Exponential Backoff is a sub-set of its stable throughput region . All analytical results are verified by simulation.
A Markovian model of 1-persistent CSMA/CA protocols with K-Exponential Backoff scheduling algorithms is proposed in this paper . The input buffer of each access node is modeled as a Geo / G/1 queue, and the service time distribution of each individual head-of-line packet is derived from the Markov chain of underlying scheduling algorithm, from which we obtain stable regions of the retransmission factor that ensures the throughput stability of the system and bounded mean delay of input packets . Our results show that the throughput of Geometric Retransmission (K = 1) is inherently unstable for large n since its stable throughput region vanishes as the number of nodes n -> infinity; while the throughput stable region of Exponential Backoff still exists even for an infinite population . Moreover, we find that the bounded delay region of Geometric Retransmission is the same as its stable throughput region; but that of Exponential Backoff is only a sub-set of its stable throughput region due to the capture effect, which causes large variances of the service time of input packets . All analytical results presented in this paper are verified by simulation.
[ { "type": "A", "before": null, "after": "of 1-persistent CSMA/CA protocols with K-Exponential Backoff scheduling algorithms", "start_char_pos": 18, "end_char_pos": 18 }, { "type": "R", "before": "to study the performance of 1-persistent CSMA", "after": ". The input buffer of each access node is modeled as a Geo", "start_char_pos": 45, "end_char_pos": 90 }, { "type": "R", "before": "CA protocols, from", "after": "G/1 queue, and the service time distribution of each individual head-of-line packet is derived from the Markov chain of underlying scheduling algorithm, from", "start_char_pos": 93, "end_char_pos": 111 }, { "type": "R", "before": "with respect to the throughput and bounded delay of Geometric Retransmission and Exponential Backoff scheduling algorithms", "after": "of the retransmission factor that ensures the throughput stability of the system and bounded mean delay of input packets", "start_char_pos": 143, "end_char_pos": 265 }, { "type": "R", "before": "is", "after": "(K = 1) is inherently", "start_char_pos": 333, "end_char_pos": 335 }, { "type": "A", "before": null, "after": "since its stable throughput region vanishes as the number of nodes n -> infinity;", "start_char_pos": 357, "end_char_pos": 357 }, { "type": "A", "before": null, "after": "stable region", "start_char_pos": 379, "end_char_pos": 379 }, { "type": "R", "before": "for n -> infinity", "after": "even for an infinite population", "start_char_pos": 416, "end_char_pos": 433 }, { "type": "A", "before": null, "after": "we find that", "start_char_pos": 446, "end_char_pos": 446 }, { "type": "A", "before": null, "after": "only", "start_char_pos": 580, "end_char_pos": 580 }, { "type": "A", "before": null, "after": "due to the capture effect, which causes large variances of the service time of input packets", "start_char_pos": 623, "end_char_pos": 623 }, { "type": "A", "before": null, "after": "presented in this paper", "start_char_pos": 649, "end_char_pos": 649 } ]
[ 0, 267, 435, 544, 625 ]
1008.1628
2
A Markovian model of 1-persistent CSMA/CA protocols with K-Exponential Backoff scheduling algorithms is proposed in this paper . The input buffer of each access node is modeled as a Geo/G/1 queue, and the service time distribution of each individual head-of-line packet is derived from the Markov chain of underlying scheduling algorithm , from which we obtain stable regions of the retransmission factor that ensures the throughput stability of the system and bounded mean delay of input packets . Our results show that the throughput of Geometric Retransmission (K = 1) is inherently unstable for large n since its stable throughput region vanishes as the number of nodes n -> infinity; while the throughput stable region of Exponential Backoff still exists even for an infinite population. Moreover, we find that the bounded delay region of Geometric Retransmission is the same as its stable throughput region; but that of Exponential Backoff is only a sub-set of its stable throughput region due to the capture effect, which causes large variances of the service time of input packets . All analytical results presented in this paper are verified by simulation .
This paper proposes a Markovian model of 1-persistent CSMA/CA protocols with K-Exponential Backoff scheduling algorithms . The input buffer of each access node is modeled as a Geo/G/1 queue, and the service time distribution of each individual head-of-line packet is derived from the Markov chain of the underlying scheduling algorithm . From the queuing model, we derive the characteristic equation of network throughput and obtain the stable throughput and bounded delay regions with respect to the retransmission factor . Our results show that the stable throughput region of the exponential backoff scheme exists even for an infinite population. Moreover, we find that the bounded delay region of exponential backoff is only a sub-set of its stable throughput region due to the large variance of the service time of input packets caused by the capture effect . All analytical results presented in this paper are verified by simulations .
[ { "type": "R", "before": "A", "after": "This paper proposes a", "start_char_pos": 0, "end_char_pos": 1 }, { "type": "D", "before": "is proposed in this paper", "after": null, "start_char_pos": 101, "end_char_pos": 126 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 306, "end_char_pos": 306 }, { "type": "R", "before": ", from which we obtain stable regions of the retransmission factor that ensures the throughput stability of the system and bounded mean delay of input packets", "after": ". From the queuing model, we derive the characteristic equation of network throughput and obtain the stable throughput and bounded delay regions with respect to the retransmission factor", "start_char_pos": 339, "end_char_pos": 497 }, { "type": "D", "before": "throughput of Geometric Retransmission (K = 1) is inherently unstable for large n since its", "after": null, "start_char_pos": 526, "end_char_pos": 617 }, { "type": "R", "before": "vanishes as the number of nodes n -> infinity; while the throughput stable region of Exponential Backoff still", "after": "of the exponential backoff scheme", "start_char_pos": 643, "end_char_pos": 753 }, { "type": "R", "before": "Geometric Retransmission is the same as its stable throughput region; but that of Exponential Backoff is", "after": "exponential backoff is", "start_char_pos": 845, "end_char_pos": 949 }, { "type": "R", "before": "capture effect, which causes large variances", "after": "large variance", "start_char_pos": 1008, "end_char_pos": 1052 }, { "type": "A", "before": null, "after": "caused by the capture effect", "start_char_pos": 1090, "end_char_pos": 1090 }, { "type": "R", "before": "simulation", "after": "simulations", "start_char_pos": 1156, "end_char_pos": 1166 } ]
[ 0, 128, 499, 689, 793, 914, 1092 ]
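
A quick numerical illustration of the instability claim in the two records above, using the standard slotted-collision approximation in which a slot succeeds iff exactly one of n saturated nodes transmits, each with probability p. This is a simplification for intuition, not the paper's Geo/G/1 analysis.

import numpy as np

# Per-slot success probability with n saturated nodes, each transmitting w.p. p:
#   S(n, p) = n * p * (1 - p)^(n - 1)
def throughput(n, p):
    return n * p * (1.0 - p) ** (n - 1)

p_fixed = 0.05  # a fixed retransmission factor, as in geometric retransmission
for n in [10, 20, 50, 100, 200]:
    print(f"n={n:4d}  fixed p: S={throughput(n, p_fixed):.4f}   "
          f"adapted p=1/n: S={throughput(n, 1.0 / n):.4f}")
# With fixed p the throughput collapses as n grows, while p = 1/n keeps it near 1/e,
# mirroring the instability of geometric retransmission versus adaptive backoff.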
1008.3276
1
Motivated by applications to bond markets, we propose a multivariate framework for discrete time financial markets with proportional transaction costs and a countable infinite number of tradable assets. We show that the no-arbitrage of second kind property (NA2 in short), introduced by [ras09], allows to provide a closure property for the set of attainable claims in a very natural way, under a suitable efficient friction condition. We also extend to this context the equivalence between NA2 and the existence of multiple (strictly) consistent price systems.
Motivated by applications to bond markets, we propose a multivariate framework for discrete time financial markets with proportional transaction costs and a countable infinite number of tradable assets. We show that the no-arbitrage of second kind property (NA2 in short), recently introduced by Rasonyi for finite-dimensional markets, allows us to provide a closure property for the set of attainable claims in a very natural way, under a suitable efficient friction condition. We also extend to this context the equivalence between NA2 and the existence of many (strictly) consistent price systems.
[ { "type": "R", "before": "introduced by \\mbox{%DIFAUXCMD ras09", "after": "recently introduced by Rasonyi for finite-dimensional", "start_char_pos": 273, "end_char_pos": 309 }, { "type": "A", "before": null, "after": "us", "start_char_pos": 326, "end_char_pos": 326 }, { "type": "R", "before": "multiple", "after": "many", "start_char_pos": 540, "end_char_pos": 548 } ]
[ 0, 202, 459 ]
1008.3650
1
We study the timing of derivative purchases in incomplete markets. In our model, an investor attempts to maximize the spread between her model price and the offered market price through optimally timing her purchase. Both the investor and the market value the options by risk-neutral expectations but under different equivalent martingale measures representing different market views. We show that the structure of the resulting optimal stopping problem depends on the interaction between the respective market price of risk and the option payoff. In particular, a crucial role is played by the delayed purchase premium that is related to the stochastic bracket between the market price and the buyer's risk premia. Explicit characterization of the purchase timing is given for two representative classes of Markovian models: (i) defaultable equity models with local intensity; (ii) diffusion stochastic volatility models. Several numerical examples are presented to illustrate the results. Our model is also applicable in the related contexts of hedging long-dated options and quasi-static hedging .
We study the optimal timing of derivative purchases in incomplete markets. In our model, an investor attempts to maximize the spread between her model price and the offered market price through optimally timing her purchase. Both the investor and the market value the options by risk-neutral expectations but under different equivalent martingale measures representing different market views. The structure of the resulting optimal stopping problem depends on the interaction between the respective market price of risk and the option payoff. In particular, a crucial role is played by the delayed purchase premium that is related to the stochastic bracket between the market price and the buyer's risk premia. Explicit characterization of the purchase timing is given for two representative classes of Markovian models: (i) defaultable equity models with local intensity; (ii) diffusion stochastic volatility models. Several numerical examples are presented to illustrate the results. Our model is also applicable to the optimal rolling of long-dated options and sequential buying and selling of options .
[ { "type": "A", "before": null, "after": "optimal", "start_char_pos": 13, "end_char_pos": 13 }, { "type": "R", "before": "We show that the", "after": "The", "start_char_pos": 386, "end_char_pos": 402 }, { "type": "R", "before": "in the related contexts of hedging", "after": "to the optimal rolling of", "start_char_pos": 1021, "end_char_pos": 1055 }, { "type": "R", "before": "quasi-static hedging", "after": "sequential buying and selling of options", "start_char_pos": 1079, "end_char_pos": 1099 } ]
[ 0, 67, 217, 385, 548, 716, 878, 923, 991 ]
1008.3722
1
In this paper we consider a new class of dynamic pricing principles and recursive utilities. We start with the interpretation of the generator of a backward stochastic differential equation as an infinitesimal pricing rule or an instantaneous utility. With this interpretation the generator has an economic meaning and describes the subjective views of the investor concerning the expected change in the price or the utility. We give a motivation for considering non-Markovian generators of BSDEs which leads us to (\frac{1}{t}\int_0^tY(s)ds, \frac{1}{t}\int_0^tZ(s)ds). This seems to be a natural step forward in the study of so-called time-delayed backward stochastic differential equations. We investigate two pricing principles and recursive utilities which are derived from time-delayed BSDEs with generators of a moving average type . They might be useful in the case of an individual valuation of a pay-off. A non-Markovian generator arises when the local valuation rule of the investor depends on the past values of prices, volatilities or utilities. Some properties of our new pricing principles and recursive utilities are considered and we show that they are fundamentally different from the properties which hold for prices and utilities based on classical BSDEs. An interpretation of this fact is provided .
In this paper we consider backward stochastic differential equations with time-delayed generators of a moving average type. The classical and well-known framework with linear generators depending on (Y(t),Z(t)) is extended and we investigate linear generators depending on (\frac{1}{t}\int_0^tY(s)ds, \frac{1}{t}\int_0^tZ(s)ds). This seems to be a natural step forward in the study of BSDEs. We derive explicit solutions to the corresponding time-delayed BSDEs and we investigate in detail some properties of the solutions. In particular, these solutions are linear, time-consistent, continuous, non-monotonic and not conditionally invariant with respect to a terminal condition. A motivation for dealing with generators of a moving average type in the context of dynamic pricing and recursive utilities is given. We propose a new direction in modelling of investors' preferences and behaviors .
[ { "type": "D", "before": "a new class of dynamic pricing principles and recursive utilities. We start with the interpretation of the generator of a", "after": null, "start_char_pos": 26, "end_char_pos": 147 }, { "type": "R", "before": "equation as an infinitesimal pricing rule or an instantaneous utility. With this interpretation the generator has an economic meaning and describes the subjective views of the investor concerning the expected change in the price or the utility. We give a motivation for considering non-Markovian generators of BSDEs which leads us to", "after": "equations with time-delayed generators of a moving average type. The classical and well-known framework with linear generators depending on (Y(t),Z(t)) is extended and we investigate linear generators depending on (\\frac{1", "start_char_pos": 181, "end_char_pos": 514 }, { "type": "R", "before": "so-called", "after": "BSDEs. We derive explicit solutions to the corresponding", "start_char_pos": 612, "end_char_pos": 621 }, { "type": "D", "before": "backward stochastic differential equations. We investigate two pricing principles and recursive utilities which are derived from time-delayed", "after": null, "start_char_pos": 635, "end_char_pos": 776 }, { "type": "R", "before": "with", "after": "and we investigate in detail some properties of the solutions. In particular, these solutions are linear, time-consistent, continuous, non-monotonic and not conditionally invariant with respect to a terminal condition. A motivation for dealing with", "start_char_pos": 783, "end_char_pos": 787 }, { "type": "R", "before": ". They might be useful in the case of an individual valuation of a pay-off. A non-Markovian generator arises when the local valuation rule of the investor depends on the past values of prices, volatilities or utilities. Some properties of our new pricing principles", "after": "in the context of dynamic pricing", "start_char_pos": 824, "end_char_pos": 1089 }, { "type": "R", "before": "are considered and we show that they are fundamentally different from the properties which hold for prices and utilities based on classical BSDEs. An interpretation of this fact is provided", "after": "is given. We propose a new direction in modelling of investors' preferences and behaviors", "start_char_pos": 1114, "end_char_pos": 1303 } ]
[ 0, 92, 251, 425, 553, 678, 825, 899, 1043, 1260 ]
1008.3722
2
In this paper we consider backward stochastic differential equations with time-delayed generators of a moving average type. The classical and well-known framework with linear generators depending on (Y(t),Z(t)) is extended and we investigate linear generators depending on (\frac{1}{t}\int_0^tY(s)ds, \frac{1}{t}\int_0^tZ(s)ds). This seems to be a natural step forward in the study of BSDEs. We derive explicit solutions to the corresponding time-delayed BSDEs and we investigate in detail some properties of the solutions. In particular, these solutions are linear, time-consistent, continuous, non-monotonic and not conditionally invariant with respect to a terminal condition. A motivation for dealing with generators of a moving average type in the context of dynamic pricing and recursive utilities is given. We propose a new direction in modelling of investors' preferences and behaviors .
In this paper we consider backward stochastic differential equations with time-delayed generators of a moving average type. The classical framework with linear generators depending on (Y(t),Z(t)) is extended and we investigate linear generators depending on (\frac{1}{t}\int_0^tY(s)ds, \frac{1}{t}\int_0^tZ(s)ds). We derive explicit solutions to the corresponding time-delayed BSDEs and we investigate in detail main properties of the solutions. An economic motivation for dealing with the BSDEs with the time-delayed generators of the moving average type is given. We argue that such equations may arise when we face the problem of dynamic modelling of non-monotone preferences. We model a disappointment effect under which the present pay-off is compared with the past expectations and a volatility aversion which causes the present pay-off to be penalized by the past exposures to the volatility risk .
[ { "type": "D", "before": "and well-known", "after": null, "start_char_pos": 138, "end_char_pos": 152 }, { "type": "D", "before": "This seems to be a natural step forward in the study of BSDEs.", "after": null, "start_char_pos": 315, "end_char_pos": 377 }, { "type": "R", "before": "some", "after": "main", "start_char_pos": 476, "end_char_pos": 480 }, { "type": "R", "before": "In particular, these solutions are linear, time-consistent, continuous, non-monotonic and not conditionally invariant with respect to a terminal condition. A", "after": "An economic", "start_char_pos": 510, "end_char_pos": 667 }, { "type": "R", "before": "generators of a", "after": "the BSDEs with the time-delayed generators of the", "start_char_pos": 696, "end_char_pos": 711 }, { "type": "D", "before": "in the context of dynamic pricing and recursive utilities", "after": null, "start_char_pos": 732, "end_char_pos": 789 }, { "type": "R", "before": "propose a new direction in modelling of investors' preferencesand behaviors", "after": "argue that such equations may arise when we face the problem of dynamic modelling of non-monotone preferences. We model a disappointment effect under which the present pay-off is compared with the past expectations and a volatility aversion which causes the present pay-off to be penalized by the past exposures to the volatility risk", "start_char_pos": 803, "end_char_pos": 878 } ]
[ 0, 123, 314, 377, 509, 665, 799 ]
1008.4006
1
Understanding protein structure is of crucial importance in science, medicine and biotechnology. For about two decades, knowledge based potentials based on pairwise distances -- so-called `` potentials of mean force '' (PMFs) -- have been at the central stage in the prediction and design of protein structure and the simulation of protein folding. However, the validity, scope and limitations of these potentials are still vigorously debated , and the optimal choice of the reference state -- a necessary component of these potentials -- is an unsolved problem. PMFs are loosely justified by analogy to the reversible work theorem in statistical physics, or by a statistical argument based on a likelihood function. Both justifications are insightful but leave many questions unanswered. Here, we show that PMFs have a rigorous probabilistic justification: they naturally arise when probability distributions over different features of proteins need to be combined. This justification is not only of theoretical relevance, but leads to many insights that are of direct practical use: the reference state is uniquely defined by the probability distributions involved, PMFs can be generalized beyond pairwise interactions to arbitrary features of protein structure and it becomes clear for which purposes the use of these potentials is justified. We illustrate these insights with two applications, respectively involving the radius of gyration and hydrogen bonding. In the latter case, we also show how a PMF can be iteratively refined to sculpt an energy funnel. Our results considerably increase the understanding and scope of energy functions derived from known biomolecular structures.
Understanding protein structure is of crucial importance in science, medicine and biotechnology. For about two decades, knowledge based potentials based on pairwise distances -- so-called " potentials of mean force " (PMFs) -- have been center stage in the prediction and design of protein structure and the simulation of protein folding. However, the validity, scope and limitations of these potentials are still vigorously debated and disputed , and the optimal choice of the reference state -- a necessary component of these potentials -- is an unsolved problem. PMFs are loosely justified by analogy to the reversible work theorem in statistical physics, or by a statistical argument based on a likelihood function. Both justifications are insightful but leave many questions unanswered. Here, we show for the first time that PMFs can be seen as approximations to quantities that do have a rigorous probabilistic justification: they naturally arise when probability distributions over different features of proteins need to be combined. We call these quantities reference ratio distributions deriving from the application of the reference ratio method. This new view is not only of theoretical relevance, but leads to many insights that are of direct practical use: the reference state is uniquely defined and does not require external physical insights; the approach can be generalized beyond pairwise distances to arbitrary features of protein structure ; and it becomes clear for which purposes the use of these quantities is justified. We illustrate these insights with two applications, involving the radius of gyration and hydrogen bonding. In the latter case, we also show how the reference ratio method can be iteratively applied to sculpt an energy funnel. Our results considerably increase the understanding and scope of energy functions derived from known biomolecular structures.
[ { "type": "R", "before": "``", "after": "\"", "start_char_pos": 188, "end_char_pos": 190 }, { "type": "R", "before": "''", "after": "\"", "start_char_pos": 216, "end_char_pos": 218 }, { "type": "R", "before": "at the central", "after": "center", "start_char_pos": 239, "end_char_pos": 253 }, { "type": "A", "before": null, "after": "and disputed", "start_char_pos": 443, "end_char_pos": 443 }, { "type": "R", "before": "that PMFs", "after": "for the first time that PMFs can be seen as approximations to quantities that do", "start_char_pos": 804, "end_char_pos": 813 }, { "type": "R", "before": "This justification", "after": "We call these quantities reference ratio distributions deriving from the application of the reference ratio method. This new view", "start_char_pos": 968, "end_char_pos": 986 }, { "type": "R", "before": "by the probability distributions involved, PMFs", "after": "and does not require external physical insights; the approach", "start_char_pos": 1126, "end_char_pos": 1173 }, { "type": "R", "before": "interactions", "after": "distances", "start_char_pos": 1209, "end_char_pos": 1221 }, { "type": "A", "before": null, "after": ";", "start_char_pos": 1265, "end_char_pos": 1265 }, { "type": "R", "before": "potentials", "after": "quantities", "start_char_pos": 1323, "end_char_pos": 1333 }, { "type": "D", "before": "respectively", "after": null, "start_char_pos": 1400, "end_char_pos": 1412 }, { "type": "R", "before": "a PMF", "after": "the reference ratio method", "start_char_pos": 1505, "end_char_pos": 1510 }, { "type": "R", "before": "refined", "after": "applied", "start_char_pos": 1530, "end_char_pos": 1537 } ]
[ 0, 96, 348, 563, 717, 789, 967, 1347, 1467, 1565 ]
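
The histogram arithmetic behind the reference-ratio view in the record above can be shown in a few lines: an energy is the negative log of an observed feature distribution over a reference distribution. The samples below are synthetic stand-ins for real structural data, and the null model is an arbitrary choice.

import numpy as np

# Reference-ratio / PMF arithmetic on synthetic data:
#   E(r) = -log( P_obs(r) / P_ref(r) )   (in units of kT)
rng = np.random.default_rng(2)
r_obs = rng.normal(5.0, 0.8, 50000)    # "observed" pair distances, e.g. from known structures
r_ref = rng.uniform(2.0, 10.0, 50000)  # "reference" distances from a chosen null model

bins = np.linspace(2.0, 10.0, 41)
p_obs, _ = np.histogram(r_obs, bins=bins, density=True)
p_ref, _ = np.histogram(r_ref, bins=bins, density=True)

mask = (p_obs > 0) & (p_ref > 0)       # avoid log of empty bins
pmf = -np.log(p_obs[mask] / p_ref[mask])
centers = 0.5 * (bins[:-1] + bins[1:])[mask]
for r, e in list(zip(centers, pmf))[::8]:
    print(f"r = {r:5.2f}  E(r) = {e:+.3f} kT")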
1008.4597
1
Quantum effects are mainly used for the determination of molecular shapes in molecular biology, but quantum information theory may be a more useful tool to understand the physics of life. Molecular biology assumes that function is explained by structure, the complementary geometries of molecules and weak intermolecular hydrogen bonds. However, both this assumption and its converse are possible. Organic molecules and quantum circuits/protocols are considered as hardware and software of living systems that are co-optimized during evolution. In this paper, we try to model DNA replication as a multiparticle entanglement swapping with a reliable qubit representation of nucleotides. In the model , molecular recognition of a nucleotide triggers an intrabase entanglement corresponding to a superposition state of different tautomer forms. Then, base pairing occurs by swapping intrabase entanglements with interbase entanglements .
Quantum effects are mainly used for the determination of molecular shapes in molecular biology, but quantum information theory may be a more useful tool to understand the physics of life. Organic molecules and quantum circuits/protocols can be considered as hardware and software of living systems that are co-optimized during evolution. We try to model DNA replication in this sense as a multi-body entanglement swapping with a reliable qubit representation of the nucleotides. In our model molecular recognition of a nucleotide triggers an intrabase entanglement corresponding to a superposition state of different tautomer forms. Then, base pairing occurs by swapping intrabase entanglements with interbase entanglements . We examine possible realizations of quantum circuits to be used to obtain intrabase entanglement and swapping protocols to be employed to obtain interbase entanglement. Finally, we discuss possible ways for computational and experimental verification of the model .
[ { "type": "R", "before": "Molecular biology assumes that function is explained by structure, the complementary geometries of molecules and weak intermolecular hydrogen bonds. However, both this assumption and its converse are possible URLanic molecules and", "after": "Organic molecules and", "start_char_pos": 188, "end_char_pos": 418 }, { "type": "R", "before": "are", "after": "can be", "start_char_pos": 446, "end_char_pos": 449 }, { "type": "R", "before": "In this paper, we", "after": "We", "start_char_pos": 544, "end_char_pos": 561 }, { "type": "R", "before": "as a multiparticle", "after": "in this sense as a multi-body", "start_char_pos": 591, "end_char_pos": 609 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 672, "end_char_pos": 672 }, { "type": "R", "before": "the model ,", "after": "our model", "start_char_pos": 689, "end_char_pos": 700 }, { "type": "A", "before": null, "after": ". We examine possible realizations of quantum circuits to be used to obtain intrabase entanglement and swapping protocols to be employed to obtain interbase entanglement. Finally, we discuss possible ways for computational and experimental verification of the model", "start_char_pos": 933, "end_char_pos": 933 } ]
[ 0, 187, 336, 543, 685, 841 ]
1008.4841
1
In this paper we analytically study the pricing of the arithmetically averaged Asian option in the path integral formalism. By a trick about the Dirac delta function, the measure of the path integral is defined by an effective action whose potential term is an exponential function , i. e. the Liouville Hamiltonian, which can be explicitly solved . After working out some auxiliary integrations involving Bessel and Whittaker functions, we arrive at the spectral expansion expression of the value of an Asian option .
In this paper we analytically study the problem of pricing an arithmetically averaged Asian option in the path integral formalism. By a trick about the Dirac delta function, the measure of the path integral is defined by an effective action functional whose potential term is an exponential function . This path integral is evaluated by use of the Feynman-Kac theorem. After working out some auxiliary integrations involving Bessel and Whittaker functions, we arrive at the spectral expansion for the value of Asian options .
[ { "type": "R", "before": "pricing of the", "after": "problem of pricing an", "start_char_pos": 40, "end_char_pos": 54 }, { "type": "A", "before": null, "after": "functional", "start_char_pos": 234, "end_char_pos": 234 }, { "type": "D", "before": ", i. e. the Liouville Hamiltonian, which can be explicitly solved", "after": null, "start_char_pos": 283, "end_char_pos": 348 }, { "type": "A", "before": null, "after": "This path integral is evaluated by use of the Feynman-Kac theorem.", "start_char_pos": 351, "end_char_pos": 351 }, { "type": "R", "before": "expression of", "after": "for", "start_char_pos": 476, "end_char_pos": 489 }, { "type": "R", "before": "an Asian option", "after": "Asian options", "start_char_pos": 503, "end_char_pos": 518 } ]
[ 0, 123 ]
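
The record above prices the arithmetic Asian option by spectral expansion; a plain Monte Carlo under Black-Scholes dynamics is a convenient cross-check and is sketched below. All parameters are illustrative, and this is not the paper's path-integral computation.

import numpy as np

# Monte Carlo price of an arithmetic-average Asian call under GBM,
# payoff max(mean(S_t) - K, 0); parameters are illustrative only.
rng = np.random.default_rng(3)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 252, 20000
dt = T / n_steps

z = rng.normal(size=(n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)
avg = S.mean(axis=1)                               # arithmetic average over the path
price = np.exp(-r * T) * np.maximum(avg - K, 0.0).mean()
print(f"arithmetic Asian call ~ {price:.4f}")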
1008.5359
1
We extend the groupoid formalism of coupled cell networks as developed by Golubitsky, Stewart and their collaborators by recasting it in a categorical language . In particular we develop a combinatorial models for the categories of modular continuous time dynamical systems on manifolds .
We develop a new framework for the study of complex continuous time dynamical systems based on viewing them as collections of interacting control modules. This framework is inspired by and builds upon the groupoid formalism of Golubitsky, Stewart and their collaborators . Our approach uses the tools and --- more importantly ---the stance of category theory. This enables us to put the groupoid formalism in a coordinate-free setting and to extend it from ordinary differential equations to vector fields on manifolds . In particular , we construct combinatorial models for categories of modular continuous time dynamical systems . Each such model, as a category, is a fibration over an appropriate category of labeled directed graphs. This makes precise the relation between dynamical systems living on networks and the combinatorial structure of the underlying directed graphs, allowing us to exploit the relation in new and interesting ways .
[ { "type": "R", "before": "extend the groupoid formalism of coupled cell networks as developed by", "after": "develop a new framework for the study of complex continuous time dynamical systems based on viewing them as collections of interacting control modules. This framework is inspired by and builds upon the groupoid formalism of", "start_char_pos": 3, "end_char_pos": 73 }, { "type": "R", "before": "by recasting it in a categorical language", "after": ". Our approach uses the tools and --- more importantly ---the stance of category theory. This enables us to put the groupoid formalism in a coordinate-free setting and to extend it from ordinary differential equations to vector fields on manifolds", "start_char_pos": 118, "end_char_pos": 159 }, { "type": "R", "before": "we develop a", "after": ", we construct", "start_char_pos": 176, "end_char_pos": 188 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 214, "end_char_pos": 217 }, { "type": "R", "before": "on manifolds", "after": ". Each such model, as a category, is a fibration over an appropriate category of labeled directed graphs. This makes precise the relation between dynamical systems living on networks and the combinatorial structure of the underlying directed graphs, allowing us to exploit the relation in new and interesting ways", "start_char_pos": 274, "end_char_pos": 286 } ]
[ 0, 161 ]
1008.5373
1
In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems has closed-form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by our method when applied to the rank constrained minimization problem is a stationary point of a nonlinear reformulation of the problem. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.
In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first establish that a class of special rank minimization problems has closed-form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by the penalty decomposition methods satisfies the first-order optimality conditions of a nonlinear reformulation of the problems. Finally, we test the performance of our methods by applying them to the matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods are generally comparable or superior to the existing methods in terms of solution quality.
[ { "type": "D", "before": "show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we", "after": null, "start_char_pos": 134, "end_char_pos": 269 }, { "type": "A", "before": null, "after": "special", "start_char_pos": 296, "end_char_pos": 296 }, { "type": "R", "before": "have closed form", "after": "has closed-form", "start_char_pos": 324, "end_char_pos": 340 }, { "type": "R", "before": "our method when applied to the rank constrained minimization problem is a stationary point", "after": "the penalty decomposition methods satisfies the first-order optimality conditions", "start_char_pos": 627, "end_char_pos": 717 }, { "type": "R", "before": "problem", "after": "problems", "start_char_pos": 754, "end_char_pos": 761 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 832, "end_char_pos": 832 }, { "type": "R", "before": "generally outperform", "after": "are generally comparable or superior to", "start_char_pos": 956, "end_char_pos": 976 }, { "type": "D", "before": "and/or speed", "after": null, "start_char_pos": 1027, "end_char_pos": 1039 } ]
[ 0, 124, 248, 351, 528, 763, 900 ]
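To illustrate the flavor of a penalty decomposition method with a block coordinate scheme, a sketch for rank-constrained matrix completion: the X-block has a closed-form minimizer, the Y-block is a truncated SVD, and the penalty weight is gradually increased. This is a simplified stand-in, not the authors' exact algorithm; all parameters are assumptions:

# Penalty-decomposition-style iteration for
# min ||P_Omega(X - M)||_F^2  s.t. rank(X) <= r.
import numpy as np

def pd_matrix_completion(m_obs, mask, r, rho=1.0, growth=1.05, n_iter=300):
    x = np.where(mask, m_obs, 0.0)
    y = x.copy()
    for _ in range(n_iter):
        # X-block: exact minimizer of ||P_Omega(X - M)||^2 + rho*||X - Y||^2
        x = np.where(mask, (m_obs + rho * y) / (1.0 + rho), y)
        # Y-block: best rank-r approximation of X (Eckart-Young, truncated SVD)
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        y = (u[:, :r] * s[:r]) @ vt[:r]
        rho *= growth          # gradually enforce the rank constraint
    return y

# synthetic rank-3 demo with 50% of entries observed
rng = np.random.default_rng(0)
m_true = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))
mask = rng.random(m_true.shape) < 0.5
m_hat = pd_matrix_completion(np.where(mask, m_true, 0.0), mask, r=3)
print(np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))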
1009.0870
1
In this paper, we propose a stochastic model to describe how modern search service providers charge client companies based on users' queries for their related "adwords" by using certain advertisement assignment strategies. We formulate an optimization problem to maximize the long-term average revenue for the service provider under each client's long-term average budget constraint, and design an online algorithm which captures the stochastic properties of users' queries and click-through behaviors. We solve the optimization problem by making connections to scheduling problems in wireless networks, queueing theory and stochastic networks. With a small customizable parameter \epsilon which is the step size used in each iteration of the online algorithm, we have shown that our online algorithm achieves a long-term average revenue which is within O(\epsilon) of the optimal revenue and the overdraft level of this algorithm is upper-bounded by O(1/\epsilon). We lose at most a \frac{\Delta}{1+\Delta} fraction of the revenue if \Delta is the percentage error in click-through rate estimation. We also show that in the long run, an expected overdraft level of \Omega(\log(1/\epsilon)) is unavoidable (a universal lower bound) under any stationary ad assignment algorithm which achieves a long-term average revenue within O(\epsilon) of the offline optimum.
In this paper, we propose a stochastic model to describe how search service providers charge client companies based on users' queries for the keywords related to these companies' ads by using certain advertisement assignment strategies. We formulate an optimization problem to maximize the long-term average revenue for the service provider under each client's long-term average budget constraint, and design an online algorithm which captures the stochastic properties of users' queries and click-through behaviors. We solve the optimization problem by making connections to scheduling problems in wireless networks, queueing theory and stochastic networks. Unlike prior models, we do not assume that the number of query arrivals is known. Due to the stochastic nature of the arrival process considered here, either temporary "free" service, i.e., service above the specified budget (which we call "overdraft") or under-utilization of the budget (which we call "underdraft") is unavoidable. We prove that our online algorithm can achieve a revenue that is within O(\epsilon) of the optimal revenue while ensuring that the overdraft or underdraft is O(1/\epsilon), where \epsilon can be arbitrarily small. With a view towards practice, we also show that one can always operate strictly under the budget. Our algorithm also allows us to quantify the effect of errors in click-through rate estimation on the achieved revenue. We show that we lose at most a \frac{\Delta}{1+\Delta} fraction of the revenue if \Delta is the percentage error in click-through rate estimation. We also show that in the long run, an expected overdraft level of \Omega(\log(1/\epsilon)) is unavoidable (a universal lower bound) under any stationary ad assignment algorithm which achieves a long-term average revenue within O(\epsilon) of the offline optimum.
[ { "type": "D", "before": "modern", "after": null, "start_char_pos": 61, "end_char_pos": 67 }, { "type": "R", "before": "their related \"adwords\"", "after": "the keywords related to these companies' ads", "start_char_pos": 145, "end_char_pos": 168 }, { "type": "R", "before": "With a small customizable parameter \\epsilon which is the step size used in each iteration of the online algorithm, we have shown", "after": "Unlike prior models, we do not assume that the number of query arrivals is known. Due to the stochastic nature of the arrival process considered here, either temporary \"free\" service, i.e., service above the specified budget (which we call \"overdraft\") or under-utilization of the budget (which we call \"underdraft\") is unavoidable. We prove", "start_char_pos": 645, "end_char_pos": 774 }, { "type": "R", "before": "achieves a long-term average revenue which", "after": "can achieve a revenue that", "start_char_pos": 801, "end_char_pos": 843 }, { "type": "R", "before": "and the overdraft level of this algorithm is upper-bounded by", "after": "while ensuring that the overdraft or underdraft is", "start_char_pos": 889, "end_char_pos": 950 }, { "type": "A", "before": null, "after": ", where \\epsilon can be arbitrarily small. With a view towards practice, we also show that one can always operate strictly under the budget. Our algorithm also allows us to quantify the effect of errors in click-through rate estimation on the achieved revenue. We show that we lose at most \\frac{\\Delta", "start_char_pos": 965, "end_char_pos": 965 } ]
[ 0, 222, 502, 644, 1067 ]
1009.0870
2
In this paper, we propose a stochastic model to describe how search service providers charge client companies based on users' queries for the keywords related to these companies' ads by using certain advertisement assignment strategies. We formulate an optimization problem to maximize the long-term average revenue for the service provider under each client's long-term average budget constraint, and design an online algorithm which captures the stochastic properties of users' queries and click-through behaviors. We solve the optimization problem by making connections to scheduling problems in wireless networks, queueing theory and stochastic networks. Unlike prior models, we do not assume that the number of query arrivals is known. Due to the stochastic nature of the arrival process considered here, either temporary "free" service, i.e., service above the specified budget (which we call "overdraft") or under-utilization of the budget (which we call "underdraft") is unavoidable. We prove that our online algorithm can achieve a revenue that is within O(\epsilon) of the optimal revenue while ensuring that the overdraft or underdraft is O(1/\epsilon), where \epsilon can be arbitrarily small. With a view towards practice, we also show that one can always operate strictly under the budget. Our algorithm also allows us to quantify the effect of errors in click-through rate estimation on the achieved revenue. We show that we lose at most a \frac{\Delta}{1+\Delta} fraction of the revenue if \Delta is the percentage error in click-through rate estimation. We also show that in the long run, an expected overdraft level of \Omega(\log(1/\epsilon)) is unavoidable (a universal lower bound) under any stationary ad assignment algorithm which achieves a long-term average revenue within O(\epsilon) of the offline optimum.
In this paper, we propose a stochastic model to describe how search service providers charge client companies based on users' queries for the keywords related to these companies' ads by using certain advertisement assignment strategies. We formulate an optimization problem to maximize the long-term average revenue for the service provider under each client's long-term average budget constraint, and design an online algorithm which captures the stochastic properties of users' queries and click-through behaviors. We solve the optimization problem by making connections to scheduling problems in wireless networks, queueing theory and stochastic networks. Unlike prior models, we do not assume that the number of query arrivals is known. Due to the stochastic nature of the arrival process considered here, either temporary "free" service, i.e., service above the specified budget or under-utilization of the budget is unavoidable. We prove that our online algorithm can achieve a revenue that is within O(\epsilon) of the optimal revenue while ensuring that the overdraft or underdraft is O(1/\epsilon), where \epsilon can be arbitrarily small. With a view towards practice, we can show that one can always operate strictly under the budget. In addition, we extend our results to a click-through rate maximization model, and also show how our algorithm can be modified to handle non-stationary query arrival processes and clients with short-term contracts. Our algorithm allows us to quantify the effect of errors in click-through rate estimation on the achieved revenue: we lose at most a \frac{\Delta}{1+\Delta} fraction of the revenue if \Delta is the percentage error in click-through rate estimation. We also show that in the long run, an expected overdraft level of \Omega(\log(1/\epsilon)) is unavoidable (a universal lower bound) under any stationary ad assignment algorithm which achieves a long-term average revenue within O(\epsilon) of the offline optimum.
[ { "type": "D", "before": "(which we call \"overdraft\")", "after": null, "start_char_pos": 884, "end_char_pos": 911 }, { "type": "D", "before": "(which we call \"underdraft\")", "after": null, "start_char_pos": 947, "end_char_pos": 975 }, { "type": "R", "before": "also", "after": "can", "start_char_pos": 1239, "end_char_pos": 1243 }, { "type": "R", "before": "Our algorithm also", "after": "In addition, we extend our results to a click-through rate maximization model, and also show how our algorithm can be modified to handle non-stationary query arrival processes and clients with short-term contracts. Our algorithm", "start_char_pos": 1304, "end_char_pos": 1322 }, { "type": "D", "before": ". We show that we lose at most \\frac{\\Delta", "after": null, "start_char_pos": 1423, "end_char_pos": 1466 } ]
[ 0, 236, 516, 658, 740, 991, 1205, 1303, 1424, 1570 ]
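For intuition about how such queue-based online algorithms operate, a generic drift-plus-penalty pacing loop (a standard device from stochastic network optimization, sketched here under assumed bids, click-through rates and budgets; it is not the paper's exact algorithm, and V plays a role analogous to 1/\epsilon):

# Generic budget-pacing loop: virtual "overdraft" queues per client.
import numpy as np

rng = np.random.default_rng(1)
n_clients, n_queries = 3, 100_000
bid = np.array([1.0, 0.8, 1.2])              # payment per click (hypothetical)
ctr = np.array([0.10, 0.15, 0.08])           # estimated click-through rates
budget_rate = np.array([0.02, 0.03, 0.015])  # allowed average spend per query
q = np.zeros(n_clients)                      # virtual overdraft queues
V = 100.0                                    # revenue/queue trade-off knob

revenue = 0.0
for _ in range(n_queries):
    score = (V - q) * bid * ctr              # V * expected pay minus queue penalty
    c = int(np.argmax(score))
    if score[c] > 0.0:                       # otherwise showing no ad is better
        if rng.random() < ctr[c]:            # simulated click
            revenue += bid[c]
            q[c] += bid[c]                   # spending inflates the client's queue
    q = np.maximum(q - budget_rate, 0.0)     # budgets drain the queues each query
print("revenue/query:", revenue / n_queries, "final queues:", q)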
1009.0932
1
We consider a zero-sum stochastic differential controller-and-stopper game in which the state process is a controlled jump-diffusion evolving in a multi-dimensional Euclidean space. In this game, the controller affects both the drift and the volatility terms of the state process. Under appropriate conditions, we show that the lower value function of this game is a viscosity solution to an obstacle problem for a Hamilton-Jacobi-Bellman equation, by generalizing the weak dynamic programming principles in [3].
We consider a zero-sum stochastic differential controller-and-stopper game in which the state process is a controlled diffusion evolving in a multi-dimensional Euclidean space. In this game, the controller affects both the drift and the volatility terms of the state process. Under appropriate conditions, we show that the game has a value and the value function is the unique viscosity solution to an obstacle problem for a Hamilton-Jacobi-Bellman equation.
[ { "type": "R", "before": "jump-diffusion", "after": "diffusion", "start_char_pos": 118, "end_char_pos": 132 }, { "type": "R", "before": "lower value function of this game is a", "after": "game has a value and the value function is the unique", "start_char_pos": 328, "end_char_pos": 366 }, { "type": "D", "before": ", by generalizing the weak dynamic programming principles in", "after": null, "start_char_pos": 448, "end_char_pos": 508 }, { "type": "D", "before": "3", "after": null, "start_char_pos": 509, "end_char_pos": 510 } ]
[ 0, 181, 280 ]
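For orientation on what such an obstacle problem looks like, a minimal sketch for the discounted case with terminal reward g only (sign conventions depend on which player stops and on any running costs, so this is illustrative rather than the paper's exact equation): if V(x) = \sup_{\tau}\inf_{\alpha}\mathbb{E}[e^{-\beta\tau} g(X^{\alpha}_{\tau})], then formally

\min\Big\{ \beta V(x) - \inf_{a \in A} \mathcal{L}^{a} V(x),\; V(x) - g(x) \Big\} = 0,

where \mathcal{L}^{a} denotes the generator of the state process under the constant control a: the controller's HJB equation holds in the continuation region, while V = g on the stopping region.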
1009.2329
1
A tick size is the smallest increment of a security price. Tick size can affect security price in direct and indirect ways. It is clear that at the shortest time scale on which individual orders are placed the tick size has a major role which affects where limit orders can be placed, the bid-ask spread, etc. This is the realm of market microstructure and in fact there is a vast literature on the role of tick size on market microstructure. However, tick size can also affect price properties at longer time scales, and relatively less is known about the effect of tick size on the statistical properties of prices. The present paper is divided into two parts. In the first we review the effect of tick size change on the market microstructure and the diffusion properties of prices. The second part presents original results obtained by investigating the tick size changes occurring at the New York Stock Exchange (NYSE). We show that tick size change has three effects on price diffusion. First, as already shown in the literature, tick size affects price return distribution at an aggregate time scale. Second, reducing the tick size typically leads to an increase of volatility clustering. We give a possible mechanistic explanation for this effect, but clearly more investigation is needed to understand the origin of this relation. Third, we explicitly show that the ability of the subordination hypothesis in explaining fat tails of returns and volatility clustering is strongly dependent on tick size. While for large tick sizes the subordination hypothesis has significant explanatory power, for small tick sizes we show that subordination is not the main driver of these two important stylized facts of financial markets.
A tick size is the smallest increment of a security price. It is clear that at the shortest time scale on which individual orders are placed the tick size has a major role which affects where limit orders can be placed, the bid-ask spread, etc. This is the realm of market microstructure and there is a vast literature on the role of tick size on market microstructure. However, tick size can also affect price properties at longer time scales, and relatively less is known about the effect of tick size on the statistical properties of prices. The present paper is divided into two parts. In the first we review the effect of tick size change on the market microstructure and the diffusion properties of prices. The second part presents original results obtained by investigating the tick size changes occurring at the New York Stock Exchange (NYSE). We show that tick size change has three effects on price diffusion. First, as already shown in the literature, tick size affects price return distribution at an aggregate time scale. Second, reducing the tick size typically leads to an increase of volatility clustering. We give a possible mechanistic explanation for this effect, but clearly more investigation is needed to understand the origin of this relation. Third, we explicitly show that the ability of the subordination hypothesis in explaining fat tails of returns and volatility clustering is strongly dependent on tick size. While for large tick sizes the subordination hypothesis has significant explanatory power, for small tick sizes we show that subordination is not the main driver of these two important stylized facts of financial markets.
[ { "type": "D", "before": "Tick size can affect security price in direct and indirect ways.", "after": null, "start_char_pos": 59, "end_char_pos": 123 }, { "type": "D", "before": "in fact", "after": null, "start_char_pos": 357, "end_char_pos": 364 } ]
[ 0, 58, 123, 309, 442, 617, 660, 783, 922, 990, 1105, 1193, 1337, 1509 ]
1009.2782
1
In this paper, we study stochastic volatility models in regimes where the maturity is small but large compared to the mean-reversion time of the stochastic volatility factor. The problem falls in the class of averaging/homogenization problems for nonlinear HJB type equations where the "fast variable" lives in a non-compact space. We develop a general argument based on viscosity solutions which we apply to the two regimes studied in the paper. We derive a large deviation principle and we deduce asymptotic prices for Out-of-The-Money call and put options, and their corresponding implied volatilities. The results of this paper generalize the ones obtained in [FFF] (Short maturity asymptotic for a fast mean reverting Heston stochastic volatility model, SIAM Journal on Financial Mathematics, Vol. 1, 2010) by a moment generating function computation in the particular case of the Heston model.
In this paper, we study stochastic volatility models in regimes where the maturity is small, but large compared to the mean-reversion time of the stochastic volatility factor. The problem falls in the class of averaging/homogenization problems for nonlinear HJB-type equations where the "fast variable" lives in a noncompact space. We develop a general argument based on viscosity solutions which we apply to the two regimes studied in the paper. We derive a large deviation principle, and we deduce asymptotic prices for out-of-the-money call and put options, and their corresponding implied volatilities. The results of this paper generalize the ones obtained in Feng, Forde and Fouque (SIAM J. Financial Math. 1 (2010) 126-141) by a moment generating function computation in the particular case of the Heston model.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 92, "end_char_pos": 92 }, { "type": "R", "before": "HJB type", "after": "HJB-type", "start_char_pos": 258, "end_char_pos": 266 }, { "type": "R", "before": "non-compact", "after": "noncompact", "start_char_pos": 314, "end_char_pos": 325 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 486, "end_char_pos": 486 }, { "type": "R", "before": "Out-of-The-Money", "after": "out-of-the-money", "start_char_pos": 523, "end_char_pos": 539 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD FFF", "after": "Feng", "start_char_pos": 666, "end_char_pos": 686 }, { "type": "D", "before": "Short maturity asymptotic for a fast mean reverting Heston stochastic volatility model", "after": null, "start_char_pos": 711, "end_char_pos": 797 }, { "type": "R", "before": ", SIAM Journal on Financial Mathematics, Vol", "after": "Forde and Fouque", "start_char_pos": 798, "end_char_pos": 842 }, { "type": "A", "before": null, "after": "SIAM J. Financial Math", "start_char_pos": 843, "end_char_pos": 843 }, { "type": "R", "before": ",", "after": "(", "start_char_pos": 848, "end_char_pos": 849 }, { "type": "A", "before": null, "after": "126-141", "start_char_pos": 856, "end_char_pos": 856 } ]
[ 0, 175, 332, 447, 607 ]
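A standard consequence worth recalling here (under regularity conditions, and stated generically rather than under this paper's exact assumptions): if the large deviation principle gives t \log \mathbb{P}(Z_t \geq k) \to -\Lambda(k) for out-of-the-money log-moneyness k > 0, then the Black-Scholes implied volatility satisfies

\lim_{t \to 0} \sigma_{\mathrm{imp}}^{2}(t, k) = \frac{k^{2}}{2\,\Lambda(k)},

which is how asymptotic option prices translate into asymptotic implied volatilities.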
1009.2896
1
Financial leverage can be regarded as an object of choice or a decision. We show how this optics allows one to perceive the recently introduced metrics of see-through-leverage, which proved to be very useful in understanding the phenomenology of the recent economic crisis.
The article presents a translation of some widespread financial terminology into the language of decision theory. For instance, financial leverage can be regarded as an object of choice or a decision. We show how the optics of decision theory allows one to perceive the recently introduced metrics of see-through-leverage, which proved to be very useful in understanding the phenomenology of the recent economic crisis.
[ { "type": "R", "before": "Financial", "after": "The article presents a translation of some widespread financial terminology into the language of decision theory. For instance, financial", "start_char_pos": 0, "end_char_pos": 9 }, { "type": "R", "before": "this optics", "after": "the optics of decision theory", "start_char_pos": 85, "end_char_pos": 96 } ]
[ 0, 72 ]
1009.3247
1
This paper is concerned with cost optimization of an insurance company. The surplus of the insurance company is modeled by a controlled regime switching diffusion, where the regime switching mechanism provides the fluctuations of the random environment. A weaker sufficient condition than that of \cite[Section V.2]{FlemingS} for the continuity of the value function is obtained. Further, the value function is shown to be a viscosity solution of a Hamilton-Jacobi-Bellman equation.
This paper is concerned with cost optimization of an insurance company. The surplus of the insurance company is modeled by a controlled regime switching diffusion, where the regime switching mechanism provides the fluctuations of the random environment. The goal is to find an optimal control that minimizes the total cost up to a stochastic exit time. A weaker sufficient condition than that of (Fleming and Soner 2006, Section V.2) for the continuity of the value function is obtained. Further, the value function is shown to be a viscosity solution of a Hamilton-Jacobi-Bellman equation.
[ { "type": "A", "before": null, "after": "The goal is to find an optimal control that minimizes the total cost up to a stochastic exit time.", "start_char_pos": 254, "end_char_pos": 254 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD \\cite[Section V.2]{FlemingS", "after": "(Fleming and Soner 2006, Section V.2)", "start_char_pos": 298, "end_char_pos": 342 } ]
[ 0, 71, 253, 396 ]
1009.3479
1
We examine a class of Brownian based models which produce tractable incomplete equilibria. The models are based on finitely many investors with heterogeneous exponential utilities over intermediate consumption who receive partially unspanned income. The investors can trade continuously on a finite time interval in a money market account as well as a risky security. Besides establishing the existence of an equilibrium, our main result shows that the resulting equilibrium can display a lower risk-free rate and a higher risk premium relative to the usual Pareto efficient equilibrium in complete markets. Consequently, our model can simultaneously help explain the risk-free rate and equity premium puzzles.
In an incomplete continuous-time securities market with uncertainty generated by Brownian motions, we derive closed-form solutions for the equilibrium interest rate and market price of risk processes. The economy has a finite number of heterogeneous exponential utility investors, who receive partially unspanned income and can trade continuously on a finite time-interval in a money market account and a single risky security. Besides establishing the existence of an equilibrium, our main result shows that if the investors' unspanned income has stochastic countercyclical volatility, the resulting equilibrium can display both lower interest rates and higher risk premia compared to the Pareto efficient equilibrium in an otherwise identical complete market.
[ { "type": "R", "before": "We examine a class of Brownian based models which produce tractable incomplete equilibria. The models are based on finitely many investors with heterogeneous exponential utilities over intermediate consumption", "after": "In an incomplete continuous-time securities market with uncertainty generated by Brownian motions, we derive closed-form solutions for the equilibrium interest rate and market price of risk processes. The economy has a finite number of heterogeneous exponential utility investors,", "start_char_pos": 0, "end_char_pos": 209 }, { "type": "R", "before": ". The investors", "after": "and", "start_char_pos": 249, "end_char_pos": 264 }, { "type": "R", "before": "time interval", "after": "time-interval", "start_char_pos": 300, "end_char_pos": 313 }, { "type": "R", "before": "as well as a", "after": "and a single", "start_char_pos": 340, "end_char_pos": 352 }, { "type": "R", "before": "the", "after": "if the investors' unspanned income has stochastic countercyclical volatility, the", "start_char_pos": 450, "end_char_pos": 453 }, { "type": "R", "before": "a lower risk-free rate and a higher risk premium relative to the usual", "after": "both lower interest rates and higher risk premia compared to the", "start_char_pos": 488, "end_char_pos": 558 }, { "type": "R", "before": "complete markets. Consequently, our model can simultaneously help explaining the risk-free rate and equity premium puzzles", "after": "an otherwise identical complete market", "start_char_pos": 591, "end_char_pos": 713 } ]
[ 0, 90, 250, 368, 608 ]
1009.3638
1
In practice daily volatility of portfolio returns is transformed to longer holding periods by multiplying by the square-root of time which assumes that returns are not serially correlated. Under this assumption this procedure of scaling can also be applied to contributions to volatility of the assets in the portfolio. Trading at exchanges located in distant time zones can lead to significant serial cross-correlations of the returns of these assets when using close prices as is usually done in practice. These serial correlations cause the square-root-of-time rule to fail. Moreover volatility contributions in this setting turn out to be misleading due to non-synchronous correlations. We address this issue and provide alternative procedures for scaling volatility and calculating risk contributions for arbitrary holding periods.
In practice daily volatility of portfolio returns is transformed to longer holding periods by multiplying by the square-root of time which assumes that returns are not serially correlated. Under this assumption this procedure of scaling can also be applied to contributions to volatility of the assets in the portfolio. Close prices are often used to calculate the profit and loss of a portfolio. For assets trading at exchanges located in distant time zones, this can lead to significant serial cross-correlations of the closing-time returns of the assets in the portfolio. These serial correlations cause the square-root-of-time rule to fail. Moreover volatility contributions in this setting turn out to be misleading due to non-synchronous correlations. We address this issue and provide alternative procedures for scaling volatility and calculating risk contributions for arbitrary holding periods.
[ { "type": "A", "before": null, "after": "Close prices are often used to calculate the profit and loss of a portfolio.", "start_char_pos": 320, "end_char_pos": 320 }, { "type": "A", "before": null, "after": "this", "start_char_pos": 372, "end_char_pos": 372 }, { "type": "R", "before": "returns of these assets when using close prices as is usually done in practice", "after": "closing-time returns of the assets in the portfolio", "start_char_pos": 430, "end_char_pos": 508 } ]
[ 0, 188, 319, 510, 580, 693 ]
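A small simulation makes the failure mode concrete: when one asset's closing-time return lags a common driver by a day, the portfolio's daily returns are serially correlated and the square-root rule misestimates the multi-day volatility. All numbers are synthetic:

# Lagged cross-correlation between closing-time returns breaks sqrt-time scaling.
import numpy as np

rng = np.random.default_rng(2)
n = 250_000
f = rng.standard_normal(n)                          # common driver (global factor)
r_a = f + 0.5 * rng.standard_normal(n)              # asset A reacts the same day
r_b = np.roll(f, 1) + 0.5 * rng.standard_normal(n)  # asset B reacts the next day
port = 0.5 * (r_a + r_b)                            # daily portfolio return

h = 10                                              # holding period in days
agg = port[: n - n % h].reshape(-1, h).sum(axis=1)  # non-overlapping h-day returns
print("sqrt-rule vol:", np.sqrt(h) * port.std())
print("true h-day vol:", agg.std())   # larger here, due to positive serial correlation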
1009.3753
1
The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically studied for broad return distributions and returns generated by a GARCH process.
The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically verified for broad return distributions and returns generated by a GARCH process. Finally we study the case when investment is rebalanced only partially and show that this strategy can improve the investment long-term growth rate more than optimization of the rebalancing period.
[ { "type": "R", "before": "studied", "after": "verified", "start_char_pos": 469, "end_char_pos": 476 }, { "type": "A", "before": null, "after": ". Finally we study the case when investment is rebalanced only partially and show that this strategy can improve the investment long-term growth rate more than optimization of the rebalancing period", "start_char_pos": 549, "end_char_pos": 549 } ]
[ 0, 158, 337, 412 ]
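A Monte Carlo sketch of the effect described above, for a binary-return example: a proportional fee is charged on the amount traded at each rebalancing, and the long-run growth rate is estimated as a function of the rebalancing period. The parameter values are hypothetical; with these inputs the fee-free Kelly fraction happens to be f = 0.5:

# Long-run growth rate of a periodically rebalanced risky-asset/cash portfolio.
import numpy as np

def growth_rate(period, f=0.5, up=2.0, down=0.5, p=0.5,
                fee=0.002, n_steps=200_000, seed=3):
    rng = np.random.default_rng(seed)
    log_w = 0.0
    risky, cash = f, 1.0 - f
    for t in range(n_steps):
        risky *= up if rng.random() < p else down        # binary asset return
        if (t + 1) % period == 0:
            total = risky + cash
            trade = abs(f * total - risky)    # amount shifted to restore weight f
            total -= fee * trade              # proportional transaction fee
            log_w += np.log(total)            # accumulate the block's growth
            risky, cash = f, 1.0 - f          # renormalize block wealth to 1
    return log_w / n_steps

for k in (1, 2, 5, 10, 20):
    print(k, round(growth_rate(k), 5))        # scan for the optimal period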
1009.3760
1
Within the context of risk integration, we introduce in risk measurement stochastic holding period (SHP) models. This is done in order to obtain a `liquidity-adjusted risk measure' characterized by the absence of a fixed time horizon. The underlying assumption is that - due to changes on market liquidity conditions - one operates along an `operational time' to which the P&L process of liquidating a market portfolio is referred. This framework leads to a mixture of distributions for the portfolio returns, potentially allowing for skewness, heavy tails and extreme scenarios. We analyze the impact of possible distributional choices for the SHP. In a multivariate setting, we hint at the possible introduction of dependent SHP processes, which potentially lead to nonlinear dependence among the P&L processes and therefore to tail dependence across assets in the portfolio, although this may require drastic choices on the SHP distributions. We finally discuss potential developments following future availability of market data.
Within the context of risk integration, we introduce in risk measurement stochastic holding period (SHP) models. This is done in order to obtain a `liquidity-adjusted risk measure' characterized by the absence of a fixed time horizon. The underlying assumption is that - due to changes on market liquidity conditions - one operates along an `operational time' to which the P&L process of liquidating a market portfolio is referred. This framework leads to a mixture of distributions for the portfolio returns, potentially allowing for skewness, heavy tails and extreme scenarios. We analyze the impact of possible distributional choices for the SHP. In a multivariate setting, we hint at the possible introduction of dependent SHP processes, which potentially lead to nonlinear dependence among the P&L processes and therefore to tail dependence across assets in the portfolio, although this may require drastic choices on the SHP distributions. We also find that increasing dependence as measured by Kendall's tau through common SHPs appears to be unfeasible. We finally discuss potential developments following future availability of market data.
[ { "type": "A", "before": null, "after": "also find that increasing dependence as measured by Kendall's tau through common SHP's appears to be unfeasible. We", "start_char_pos": 950, "end_char_pos": 950 } ]
[ 0, 112, 234, 431, 579, 649, 946 ]
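The mixture mechanism is easy to see numerically: replacing a fixed horizon by a random one turns Gaussian P&L into a variance mixture of normals with excess kurtosis. The lognormal choice for the SHP below is only one possible distributional choice, and the parameters are hypothetical:

# A stochastic holding period produces a heavy-tailed variance mixture of normals.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(4)
n = 1_000_000
z = rng.standard_normal(n)
t_random = rng.lognormal(mean=0.0, sigma=0.75, size=n)   # SHP in, say, days

pl_fixed = np.sqrt(1.0) * z          # fixed one-day horizon
pl_shp = np.sqrt(t_random) * z       # random horizon: variance mixture
print("excess kurtosis, fixed horizon:", kurtosis(pl_fixed))       # close to 0
print("excess kurtosis, stochastic horizon:", kurtosis(pl_shp))    # clearly positive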
1009.4211
2
We consider a stochastic volatility model with L\'evy jumps for a log-return process Z=(Z_{t})_{t\geq 0} of the form Z=U+X, where U=(U_{t})_{t\geq 0} is a classical stochastic volatility process and X=(X_{t})_{t\geq 0} is an independent L\'evy process with absolutely continuous L\'evy measure \nu. Small-time expansions, of arbitrary polynomial order, in time-t, are obtained for the tails \bbp(Z_{t}\geq z), z>0, and for the call-option prices \bbe(e^{z+Z_{t}}-1)_{+}, z\neq 0, assuming smoothness conditions on the L\'evy density away from the origin and a small-time large deviation principle on U. Our approach allows for a unified treatment of general payoff functions of the form \varphi(x){\bf 1}_{x\geq z} for smooth functions \varphi and z>0. As a consequence of our tail expansions, the polynomial expansions in t of the transition densities f_{t} are also obtained under mild conditions.
We consider a stochastic volatility model with L\'evy jumps for a log-return process Z=(Z_{t})_{t\geq 0} of the form Z=U+X, where U=(U_{t})_{t\geq 0} is a classical stochastic volatility process and X=(X_{t})_{t\geq 0} is an independent L\'evy process with absolutely continuous L\'evy measure \nu. Small-time expansions, of arbitrary polynomial order, in time-t, are obtained for the tails \bbp(Z_{t}\geq z), z>0, and for the call-option prices \bbe(e^{z+Z_{t}}-1)_{+}, z\neq 0, assuming smoothness conditions on the density of \nu away from the origin and a small-time large deviation principle on U. Our approach allows for a unified treatment of general payoff functions of the form \phi(x){\bf 1}_{x\geq z} for smooth functions \phi and z>0. As a consequence of our tail expansions, the polynomial expansions in t of the transition densities f_{t} are also obtained under mild conditions.
[ { "type": "D", "before": "L\\'evy density", "after": null, "start_char_pos": 518, "end_char_pos": 532 }, { "type": "A", "before": null, "after": "density of \\nu", "start_char_pos": 544, "end_char_pos": 544 }, { "type": "R", "before": "\\varphi", "after": "\\phi", "start_char_pos": 699, "end_char_pos": 706 }, { "type": "R", "before": "\\varphi", "after": "\\phi", "start_char_pos": 744, "end_char_pos": 751 }, { "type": "D", "before": "obtained", "after": null, "start_char_pos": 876, "end_char_pos": 884 }, { "type": "A", "before": null, "after": "obtained", "start_char_pos": 893, "end_char_pos": 893 } ]
[ 0, 298, 760 ]
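For orientation, the classical first-order benchmark that such polynomial expansions refine, stated for a pure-jump L\'evy process X with L\'evy measure \nu and z>0 a continuity point of \nu (the paper's setting, with the additional stochastic volatility component U, requires the conditions described above):

\lim_{t \to 0} \frac{1}{t}\, \mathbb{P}(X_{t} \geq z) = \nu([z, \infty)),

so to leading order the small-time tail is driven entirely by a single large jump.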
1009.4330
1
We present an implementation of a general purpose GPU-Molecular Dynamics code named LAMMPScuda which is based on LAMMPS. It exhibits excellent scaling behavior, allowing for the efficient usage of hundreds of GPUs for a single simulation. At the same time each GPU provides the equivalent performance of approximately 5 modern Quad Core CPUs. By supporting a wide array of force fields LAMMPScuda allows one to model many different condensed matter systems including organic glasses, polymer systems and molecular fluids. In the paper implementation details are presented as well as performance measurements with a number of different condensed systems.
We present a GPU implementation of LAMMPS, a widely-used parallel molecular dynamics (MD) software package, and show 5x to 13x single node speedups versus the CPU-only version of LAMMPS. This new CUDA package for LAMMPS also enables multi-GPU simulation on hybrid heterogeneous clusters, using MPI for inter-node communication, CUDA kernels on the GPU for all methods working with particle data, and standard LAMMPS C++ code for CPU execution. Cell and neighbor list approaches are compared for best performance on GPUs, with thread-per-atom and block-per-atom neighbor list variants showing best performance at low and high neighbor counts, respectively. Computational performance results of GPU-enabled LAMMPS are presented for a variety of materials classes (e.g. biomolecules, polymers, metals, semiconductors), along with a speed comparison versus other available GPU-enabled MD software. Finally, we show strong and weak scaling performance on a CPU/GPU cluster using up to 128 dual GPU nodes.
[ { "type": "R", "before": "an implementation of a general purpose GPU-Molecular Dynamics code named LAMMPScuda which is based on LAMMPS. It exhibits excellent scaling behavior, allowing for the efficient usage of hundreds of GPUs for a single simulation . At the same time each GPU provides the equivalent performance of approximately 5 modern Quad Core CPUs. By supporting a wide array of force fields LAMMPScuda allows to model many different condensed matter systems including URLanic glasses, polymer systems and molecular fluids. In the paper implementation details are presented as well as performance measurements with", "after": "a GPU implementation of LAMMPS,", "start_char_pos": 11, "end_char_pos": 609 }, { "type": "R", "before": "number of different condensed systems", "after": "widely-used parallel molecular dynamics (MD) software package, and show 5x to 13x single node speedups versus the CPU-only version of LAMMPS. This new CUDA package for LAMMPS also enables multi-GPU simulation on hybrid heterogeneous clusters, using MPI for inter-node communication, CUDA kernels on the GPU for all methods working with particle data, and standard LAMMPS C++ code for CPU execution. Cell and neighbor list approaches are compared for best performance on GPUs, with thread-per-atom and block-per-atom neighbor list variants showing best performance at low and high neighbor counts, respectively. Computational performance results of GPU-enabled LAMMPS are presented for a variety of materials classes (e.g. biomolecules, polymers, metals, semiconductors), along with a speed comparison versus other available GPU-enabled MD software. Finally, we show strong and weak scaling performance on a CPU/GPU cluster using up to 128 dual GPU nodes", "start_char_pos": 612, "end_char_pos": 649 } ]
[ 0, 120, 239, 343, 518 ]
1009.5800
1
Based on the temporal distributions of clustered segments in the time series of the ten Dow Jones US (DJUS) economic sector indices, we calculated their cross correlations over the period February 2000 to August 2008, the two-year intervals 2002--2003, 2004--2005, 2008--2009, and also over 11 corresponding segments within the present financial crisis. From these cross-correlation matrices, we constructed minimal spanning trees (MSTs) of the US economy at the sector level. In all MSTs, a core-fringe structure is found, with CY, IN, and NC consistently making up the core, and BM, EN, HC, TL, UT residing predominantly on the fringe. We saw that shocks accompanying volatility movements always start at the fringe, sometimes in conjunction with anomalously high cross correlations here, and propagate inwards to the core of all MSTs of the 11 statistically-stationary corresponding segments. Most of these volatility shocks originate within the domestic fringe sectors, HC, TL, and UT, in the US economy. More importantly, we find that the MSTs can be classified into two distinct, statistically robust, topologies: (i) star-like, with IN at the center, associated with low-volatility economic growth; and (ii) chain-like, associated with high-volatility economic crisis. When we examined successive corresponding segments within the present housing bubble financial crisis, we found that each MST can be obtained from the one before it through a minimal set of primitive rearrangements, each representing a statistically significant change in the cross correlations of the sectors involved. Finally, we present statistical evidence, based on the emergence of a star-like MST in Sep 2009, and the MST staying robustly star-like throughout the Greek Debt Crisis, that the US economy is on track to a recovery.
We calculated the cross correlations between the half-hourly time series of the ten Dow Jones US economic sectors over the period February 2000 to August 2008, the two-year intervals 2002--2003, 2004--2005, 2008--2009, and also over 11 segments within the present financial crisis, to construct minimal spanning trees (MSTs) of the US economy at the sector level. In all MSTs, a core-fringe structure is found, with consumer goods, consumer services, and the industrials consistently making up the core, and basic materials, oil and gas, healthcare, telecommunications, and utilities residing predominantly on the fringe. More importantly, we find that the MSTs can be classified into two distinct, statistically robust, topologies: (i) star-like, with the industrials at the center, associated with low-volatility economic growth; and (ii) chain-like, associated with high-volatility economic crisis. Finally, we present statistical evidence, based on the emergence of a star-like MST in Sep 2009, and the MST staying robustly star-like throughout the Greek Debt Crisis, that the US economy is on track to a recovery.
[ { "type": "R", "before": "Based on the temporal distributions of clustered segments in the time", "after": "We calculated the cross correlations between the half-hourly times", "start_char_pos": 0, "end_char_pos": 69 }, { "type": "R", "before": "(DJUS) economic sector indices, we calculated their cross correlations", "after": "economic sectors", "start_char_pos": 101, "end_char_pos": 171 }, { "type": "D", "before": "corresponding", "after": null, "start_char_pos": 294, "end_char_pos": 307 }, { "type": "R", "before": ". From these cross-correlation matrices, we constructed", "after": ", to construct", "start_char_pos": 353, "end_char_pos": 408 }, { "type": "R", "before": "CY, IN, and NC", "after": "consumer goods, consumer services, and the industrials", "start_char_pos": 530, "end_char_pos": 544 }, { "type": "R", "before": "BM, EN, HC, TL, UT", "after": "basic materials, oil and gas, healthcare, telecommunications, and utilities", "start_char_pos": 582, "end_char_pos": 600 }, { "type": "D", "before": "We saw that shocks accompanying volatility movements always start at the fringe, sometimes in conjunction with anomalously high cross correlations here, and propagate inwards to the core of all MSTs of the 11 statistically-stationary corresponding segments. Most of these volatility shocks originate within the domestic fringe sectors, HC, TL, and UT, in the US economy.", "after": null, "start_char_pos": 639, "end_char_pos": 1009 }, { "type": "R", "before": "IN", "after": "the industrials", "start_char_pos": 1141, "end_char_pos": 1143 }, { "type": "D", "before": ". When we examined successive corresponding segments within the present housing bubble financial crisis, we find that each MST can be obtained from the one before it through a minimal set of primitive rearrangements, each representing a statistically significant change in the cross correlations of the sectors involved", "after": null, "start_char_pos": 1276, "end_char_pos": 1595 } ]
[ 0, 354, 477, 638, 896, 1009, 1206, 1277, 1597 ]
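For readers who want to reproduce the construction, a sketch of the standard correlation-to-MST pipeline with the usual metric d_ij = sqrt(2(1 - rho_ij)); the input below is random placeholder data rather than the DJUS series:

# Minimal spanning tree from a return-correlation matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
returns = rng.standard_normal((500, 10))     # placeholder for sector returns
rho = np.corrcoef(returns.T)
dist = np.sqrt(2.0 * (1.0 - rho))            # correlation-based distance

g = nx.Graph()
n = dist.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        g.add_edge(i, j, weight=dist[i, j])
mst = nx.minimum_spanning_tree(g)
print(sorted(mst.edges(data="weight")))
# star-like vs chain-like topologies can be told apart, e.g., by the maximum degree
print("max degree:", max(d for _, d in mst.degree()))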
1010.0041
1
We present an analytical framework which enables performance evaluation of different multi-channel multi-stage spectrum sensing protocols for Opportunistic Spectrum Access networks. Analyzed performance metrics include the average secondary user throughput and the average collision probability between the primary and secondary users. The analysis framework takes into account buffering of incoming secondary user traffic, parallel and single channel access, as well as prolonged channel observation periods at the first and last stage of sensing. The main results show that when a constraint is given upon the probability of primary user mis-detection, multi-stage sensing is in most cases superior to the single stage sensing counterpart. Further, prolonged channel observation at the first sensing stage decreases the collision probability considerably while keeping the throughput at an acceptable level. Finally, in most network scenarios considered in this work, two stages of sensing are enough to obtain the maximum throughput in Opportunistic Spectrum Access communication.
We present an analytical framework which enables performance evaluation of different multi-channel multi-stage spectrum sensing algorithms for Opportunistic Spectrum Access networks. The analytical framework models the following: number of sensing stages, physical layer sensing techniques, single and parallel channel access, primary and secondary user traffic, as well as buffering of incoming secondary user traffic. Analyzed performance metrics include the average secondary user throughput and the average collision probability between primary and secondary users. Our results show that when the probability of primary user mis-detection is constrained, the performance of multi-stage sensing is in most cases superior to the single stage sensing counterpart. Besides, prolonged channel observation at the first stage of sensing decreases the collision probability considerably, while keeping the throughput at an acceptable level. Finally, in realistic primary user traffic scenarios, using two stages of sensing maximizes throughput while meeting constraints imposed by Opportunistic Spectrum Access communication.
[ { "type": "R", "before": "protocols", "after": "algorithms", "start_char_pos": 128, "end_char_pos": 137 }, { "type": "A", "before": null, "after": "The analytical framework models the following: number of sensing stages, physical layer sensing techniques, single and parallel channel access, primary and secondary user traffic, as well as buffering of incoming secondary user traffic.", "start_char_pos": 182, "end_char_pos": 182 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 304, "end_char_pos": 307 }, { "type": "R", "before": "The analysis framework takes into account buffering of incoming secondary user traffic, parallel and single channel access, as well as prolonged channel observation periods at the first and last stage of sensing. The main", "after": "Our", "start_char_pos": 337, "end_char_pos": 558 }, { "type": "D", "before": "a constraint is given upon", "after": null, "start_char_pos": 582, "end_char_pos": 608 }, { "type": "R", "before": ",", "after": "is constrained, the performance of", "start_char_pos": 655, "end_char_pos": 656 }, { "type": "R", "before": "Further", "after": "Besides", "start_char_pos": 744, "end_char_pos": 751 }, { "type": "R", "before": "sensing stage", "after": "stage of sensing", "start_char_pos": 797, "end_char_pos": 810 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 860, "end_char_pos": 860 }, { "type": "R", "before": "most network scenariosconsidered in this work,", "after": "realistic primary user traffic scenarios, using", "start_char_pos": 926, "end_char_pos": 972 }, { "type": "R", "before": "are enough to obtain the maximum throughput in", "after": "maximizes throughput while meeting constraints subjected by", "start_char_pos": 995, "end_char_pos": 1041 } ]
[ 0, 181, 336, 549, 743, 913 ]
1010.0041
2
We present an analytical framework which enables performance evaluation of different multi-channel multi-stage spectrum sensing algorithms for Opportunistic Spectrum Access networks. The analytical framework models the following: number of sensing stages, physical layer sensing techniques, single and parallel channel access, primary and secondary user traffic, as well as buffering of incoming secondary user traffic. Analyzed performance metrics include the average secondary user throughput and the average collision probability between primary and secondary users. Our results show that when the probability of primary user mis-detection is constrained, the performance of multi-stage sensing is in most cases superior to the single stage sensing counterpart. Besides, prolonged channel observation at the first stage of sensing decreases the collision probability considerably, while keeping the throughput at an acceptable level. Finally, in realistic primary user traffic scenarios, using two stages of sensing maximizes throughput while meeting constraints imposed by Opportunistic Spectrum Access communication.
Multi-stage sensing is a novel concept that refers to a general class of spectrum sensing algorithms that divide the sensing process into a number of sequential stages. The number of sensing stages and the sensing technique per stage can be used to optimize performance with respect to secondary user throughput and the collision probability between primary and secondary users. So far, the impact of multi-stage sensing on network throughput and collision probability for a realistic network model is relatively unexplored. Therefore, we present the first analytical framework which enables performance evaluation of different multi-channel multi-stage spectrum sensing algorithms for Opportunistic Spectrum Access networks. The contribution of our work lies in studying the effect of the following parameters on performance: number of sensing stages, physical layer sensing techniques and durations per stage, single and parallel channel sensing and access, number of available channels, primary and secondary user traffic, buffering of incoming secondary user traffic, as well as MAC layer sensing algorithms. Analyzed performance metrics include the average secondary user throughput and the average collision probability between primary and secondary users. Our results show that when the probability of primary user mis-detection is constrained, the performance of multi-stage sensing is, in most cases, superior to the single stage sensing counterpart. Besides, prolonged channel observation at the first stage of sensing decreases the collision probability considerably, while keeping the throughput at an acceptable level. Finally, in realistic primary user traffic scenarios, using two stages of sensing provides a good balance between secondary user throughput and collision probability while meeting successful detection constraints imposed by Opportunistic Spectrum Access communication.
[ { "type": "R", "before": "We present an", "after": "Multi-stage sensing is a novel concept that refers to a general class of spectrum sensing algorithms that divide the sensing process into a number of sequential stages. The number of sensing stages and the sensing technique per stage can be used to optimize performance with respect to secondary user throughput and the collision probability between primary and secondary users. So far, the impact of multi-stage sensing on network throughput and collision probability for a realistic network model is relatively unexplored. Therefore, we present the first", "start_char_pos": 0, "end_char_pos": 13 }, { "type": "R", "before": "analytical framework models the following", "after": "contribution of our work lies in studying the effect of the following parameters on performance", "start_char_pos": 187, "end_char_pos": 228 }, { "type": "A", "before": null, "after": "and durations per each stage", "start_char_pos": 291, "end_char_pos": 291 }, { "type": "R", "before": "access,", "after": "sensing and access, number of available channels,", "start_char_pos": 322, "end_char_pos": 329 }, { "type": "D", "before": "as well as", "after": null, "start_char_pos": 366, "end_char_pos": 376 }, { "type": "A", "before": null, "after": ", as well as MAC layer sensing algorithms", "start_char_pos": 422, "end_char_pos": 422 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 706, "end_char_pos": 706 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 721, "end_char_pos": 721 }, { "type": "R", "before": "maximizes throughput while meeting", "after": "provides a good balance between secondary users throughput and collision probability while meeting successful detection", "start_char_pos": 1026, "end_char_pos": 1060 } ]
[ 0, 182, 424, 574, 771, 943 ]
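To make the single-stage versus two-stage trade-off concrete, a toy closed-form calculation for serial two-stage sensing, where a channel is used only if both stages declare it free; all probabilities, durations and the traffic model are hypothetical, not taken from the paper:

# Toy throughput/collision trade-off for serial two-stage spectrum sensing.
pd1, pf1 = 0.80, 0.20        # stage-1 detection / false-alarm probabilities
pd2, pf2 = 0.95, 0.10        # stage-2 (longer observation) probabilities
t1, t2, T = 0.5, 2.0, 20.0   # sensing durations and slot length (ms)
p_busy = 0.3                 # probability the primary user occupies the channel

p_miss = (1 - pd1) * (1 - pd2)            # both stages must miss the primary
p_pass_free = (1 - pf1) * (1 - pf2)       # an idle channel survives both stages
# stage 2 runs only when stage 1 declares the channel free
t_sense = t1 + t2 * (p_busy * (1 - pd1) + (1 - p_busy) * (1 - pf1))

throughput = (1 - p_busy) * p_pass_free * (T - t_sense) / T
collision = p_busy * p_miss
print(f"throughput fraction={throughput:.3f}  collision prob={collision:.4f}")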
1010.0090
1
Options that allow the holder to extend the maturity by paying an additional fixed amount have found many applications in finance. Closed-form solution for these options first appeared in Longstaff (1990) for the case when the underlying asset follows a geometric Brownian motion with the constant interest rate and volatility. Unfortunately there are several typographical errors in the published formula for the holder-extendible put. These are subsequently repeated in textbooks, other papers and software. This short paper presents a correct formula. Also, to generalize, the option price is derived for the case of a geometric Brownian motion with the time-dependent drift and volatility.
Financial contracts with options that allow the holder to extend the contract maturity by paying an additional fixed amount have found many applications in finance. Closed-form solutions for the price of these options have appeared in the literature for the case when the contract underlying asset follows a geometric Brownian motion with the constant interest rate, volatility, and non-negative "dividend" yield. In this paper, the option price is derived for the case of the underlying asset that follows a geometric Brownian motion with the time-dependent drift and volatility, which is important for using the solutions in real-life applications. The formulas are derived for the drift that may include non-negative or negative "dividend" yield. The latter case results in a new solution type that has not been studied in the literature. Several typographical errors in the formula for the holder-extendible put, typically repeated in textbooks and software, are corrected.
[ { "type": "R", "before": "Options", "after": "Financial contracts with options", "start_char_pos": 0, "end_char_pos": 7 }, { "type": "A", "before": null, "after": "contract", "start_char_pos": 44, "end_char_pos": 44 }, { "type": "R", "before": "solution for these options first appeared in Longstaff (1990)", "after": "solutions for the price of these options have appeared in the literature", "start_char_pos": 139, "end_char_pos": 200 }, { "type": "A", "before": null, "after": "the contract", "start_char_pos": 219, "end_char_pos": 219 }, { "type": "R", "before": "and volatility. Unfortunately there are several typographical errors in the published formula for the holder-extendible put. These are subsequently repeated in textbooks, other papers and software. This short paperpresents a correct formula. Also, to generalize,", "after": ", volatility, and non-negative \"dividend\" yield. In this paper,", "start_char_pos": 305, "end_char_pos": 567 }, { "type": "A", "before": null, "after": "the underlying asset that follows", "start_char_pos": 612, "end_char_pos": 612 }, { "type": "A", "before": null, "after": "which is important to use the solutions in real life applications. The formulas are derived for the drift that may include non-negative or negative \"dividend\" yield. The latter case results in a new solution type that has not been studied in the literature. Several typographical errors in the formula for the holder-extendible put, typically repeated in textbooks and software, are corrected", "start_char_pos": 686, "end_char_pos": 686 } ]
[ 0, 126, 320, 429, 502, 546 ]
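The time-dependent case rests on a classical reduction: for deterministic r(t) and sigma(t) under geometric Brownian motion, a European option is priced by the Black-Scholes formula with the time-averaged rate and root-mean-square volatility. A short sketch of that building block (the extendible option itself additionally needs a compound-option-style decomposition, not reproduced here; the term structures below are hypothetical):

# Black-Scholes with deterministic time-dependent rate and volatility.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def bs_call(s, k, r, sigma, t):
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

def call_time_dependent(s, k, t, r_fun, sig_fun):
    r_bar = quad(r_fun, 0.0, t)[0] / t                    # average rate
    var_bar = quad(lambda u: sig_fun(u) ** 2, 0.0, t)[0] / t  # average variance
    return bs_call(s, k, r_bar, np.sqrt(var_bar), t)

# hypothetical term structures
print(call_time_dependent(100, 100, 1.0,
                          r_fun=lambda u: 0.03 + 0.02 * u,
                          sig_fun=lambda u: 0.2 + 0.05 * u))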
1010.0208
1
A step by step procedure to derive analytically the exact steady state probability density function of well known kinetic wealth exchange economic models is shown. This gives as a result an integro-differential equation, which can be solved analytically in some cases and numerically in others. This technique should provide some guidance into the type of probability density functions that can be derived from particular economic agent exchange rules, or for that matter, any other kinetic model of gases with particular collision physics.
A step by step procedure to derive analytically the exact dynamical evolution equations of the probability density functions (PDF) of well known kinetic wealth exchange economic models is shown. This technique gives a dynamical insight into the evolution of the PDF, e.g., allowing the calculation of its relaxation times. The models' equilibrium PDFs can also be calculated by finding the stationary solutions of these equations. This gives as a result an integro-differential equation, which can be solved analytically in some cases and numerically in others. This should provide some guidance into the type of probability density functions that can be derived from particular economic agent exchange rules, or for that matter, any other kinetic model of gases with particular collision physics.
[ { "type": "R", "before": "steady state probability density function", "after": "dynamical evolution equations of the probability density functions (PDF)", "start_char_pos": 58, "end_char_pos": 99 }, { "type": "R", "before": "gives", "after": "technique gives a dynamical insight into the evolution of the PDF, e.g., allowing the calculation of its relaxation times. Their equilibrium PDFs can also be calculated by finding its stationary solutions. This gives", "start_char_pos": 169, "end_char_pos": 174 }, { "type": "D", "before": "technique", "after": null, "start_char_pos": 300, "end_char_pos": 309 } ]
[ 0, 163, 294 ]
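To make the class of models concrete: below is a minimal Python simulation of one standard kinetic wealth-exchange rule, in which a randomly chosen pair of agents reshuffles its combined wealth. This particular rule is a common textbook example from the kinetic-exchange literature (not necessarily one of the specific models analyzed above) and relaxes to an exponential, Boltzmann-Gibbs-like equilibrium PDF.

import random

def simulate_exchange(n_agents=1000, steps=200000, w0=1.0, seed=0):
    # Pick a random pair (i, j) and split their combined wealth by a
    # uniform random fraction eps; total wealth is conserved.
    rng = random.Random(seed)
    w = [w0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        eps = rng.random()
        total = w[i] + w[j]
        w[i], w[j] = eps * total, (1 - eps) * total
    return w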
1010.1961
1
In a continuous-path semimartingale market model , we perform an initial enlargement of the filtration by including the overall minimum of the numeraire portfolio. We establish that all discounted asset-price processes, when stopped at the time of the overall minimum of the numeraire portfolio, become local martingales under the enlarged filtration. This implies that risk-averse insider traders would refrain from investing in the risky assets before that time. A partial converse to the previous result is also established , showing that the time of the overall minimum of the numeraire portfolio is in a certain sense unique in rendering undesirable the act of undertaking risky positions before it. Our results shed light on the importance of the numeraire portfolio as an indicator of overall market performance.
A continuous-path semimartingale market model with wealth processes discounted by a riskless asset is considered. The numeraire portfolio is the unique strictly positive wealth process that, when used as a benchmark to denominate all other wealth, makes all wealth processes local martingales. It is assumed that the numeraire portfolio exists and that its wealth increases to infinity as time goes to infinity. Under this setting, an initial enlargement of the filtration is performed, by including the overall minimum of the numeraire portfolio. It is established that all nonnegative wealth processes, when stopped at the time of the overall minimum of the numeraire portfolio, become local martingales in the enlarged filtration. This implies that risk-averse insider traders would refrain from investing in the risky assets before that time. A partial converse to the previous result is also established in the case of complete markets , showing that the time of the overall minimum of the numeraire portfolio is in a certain sense unique in rendering undesirable the act of undertaking risky positions before it. The aforementioned results shed light on the importance of the numeraire portfolio as an indicator of overall market performance.
[ { "type": "R", "before": "In a", "after": "A", "start_char_pos": 0, "end_char_pos": 4 }, { "type": "R", "before": ", we perform", "after": "with wealth processes discounted by a riskless asset is considered. The numeraire portfolio is the unique strictly positive wealth process that, when used as a benchmark to denominate all other wealth, makes all wealth processes local martingales. It is assumed that the numeraire portfolio exists and that its wealth increases to infinity as time goes to infinity. Under this setting,", "start_char_pos": 49, "end_char_pos": 61 }, { "type": "A", "before": null, "after": "is performed,", "start_char_pos": 103, "end_char_pos": 103 }, { "type": "R", "before": "We establish that all discounted asset-price", "after": "It is established that all nonnegative wealth", "start_char_pos": 165, "end_char_pos": 209 }, { "type": "R", "before": "under", "after": "in", "start_char_pos": 322, "end_char_pos": 327 }, { "type": "A", "before": null, "after": "in the case of complete markets", "start_char_pos": 528, "end_char_pos": 528 }, { "type": "R", "before": "Our", "after": "The aforementioned", "start_char_pos": 707, "end_char_pos": 710 } ]
[ 0, 164, 352, 465, 706 ]
1010.2061
1
Financial market dynamics is rigorously studied via the exact generalized Langevin equation. Assuming Brownian market self-similarity, the market return memory and autocorrelation functions are derived, which exhibit an oscillatory-decaying behavior and a long-time tail similar to the empirical observations .
Financial market dynamics is rigorously studied via the exact generalized Langevin equation. Assuming market Brownian self-similarity, the market return rate memory and autocorrelation functions are derived, which exhibit an oscillatory-decaying behavior with a long-time tail , similar to empirical observations. Individual stocks are also described via the generalized Langevin equation. They are classified by their relation to the market memory as heavy, neutral and light stocks, possessing different kinds of autocorrelation functions .
[ { "type": "R", "before": "Brownian market", "after": "market Brownian", "start_char_pos": 102, "end_char_pos": 117 }, { "type": "A", "before": null, "after": "rate", "start_char_pos": 153, "end_char_pos": 153 }, { "type": "R", "before": "and", "after": "with", "start_char_pos": 251, "end_char_pos": 254 }, { "type": "R", "before": "similar to the empirical observations", "after": ", similar to empirical observations. Individual stocks are also described via the generalized Langevin equation. They are classified by their relation to the market memory as heavy, neutral and light stocks, possessing different kinds of autocorrelation functions", "start_char_pos": 272, "end_char_pos": 309 } ]
[ 0, 92 ]
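The return-rate autocorrelation function discussed above is straightforward to estimate from data. A minimal Python sketch follows; the normalization (lag-k covariance over variance, with the biased 1/n convention) is one common choice and is assumed here rather than taken from the paper.

def autocorrelation(returns, max_lag):
    # Sample autocorrelation C(k) = cov(x_t, x_{t+k}) / var(x_t),
    # using the biased 1/n normalization at every lag.
    n = len(returns)
    mean = sum(returns) / n
    var = sum((x - mean) ** 2 for x in returns) / n
    acf = []
    for k in range(max_lag + 1):
        cov = sum((returns[t] - mean) * (returns[t + k] - mean)
                  for t in range(n - k)) / n
        acf.append(cov / var)
    return acf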
1010.2199
1
Cells sense external signal constantly, process it, and make its life-determining decision by using the embedded signal processing facilities. All of those events take place within an individual cell and thus should be studied at the level of single cells. Technical advances in live cell imaging make it possible to observe the time evolution of a protein abundance in single cells. Here we use a computational model, live cell fluorescence microscopy, and quantitative RT-PCR to investigate the translocation dynamics of a protein NF-kB and its biological relevance in single macrophages (RAW264.7 cells) when the cells are stimulated by E. coli lipopolysaccharide persistently. We incorporate into the computational model the signaling pathways of TLR4-MyD88-NF-kB, TNF-R and TNFa autocrine signaling and simulate heterogeneous NF-kB response in single cells, by taking into account the cell-to-cell variability in key protein copy numbers and kinetic rate constants. We present the fascinating yet puzzling NF-kB translocation dynamics as a response to different dosage of E. coli lipopolysaccharide: homogeneous oscillatory patterns of NF-kB for a large dosage and heterogeneous monotone-increasing patterns for a small dosage .
This paper has been withdrawn by the author .
[ { "type": "R", "before": "Cells sense external signal constantly, process it, and make its life-determining decision by using the embedded signal processing facilities. All of those events take place within an individual cell and thus should be studied at the level of single cells. Technical advances in live cell imaging make it possible to observe the time evolution of a protein abundance in single cells. Here we use a computational model, live cell fluorescence microscopy, and quantitative RT-PCR to investigate the translocation dynamics of a protein NF-kB and its biological relevance in single macrophages (RAW264.7 cells) when the cells are stimulated by E. coli lipopolysaccharide persistently. We incorporate into the computational model the signaling pathways of TLR4-MyD88-NF-kB, TNF-R and TNFa autocrine signaling and simulate heterogeneous NF-kB response in single cells, by taking into account the cell-to-cell variability in key protein copy numbers and kinetic rate constants. We present the fascinating yet puzzling NF-kB translocation dynamics as a response to different dosage of E. coli lipopolysaccharide: homogeneous oscillatory patterns of NF-kB for a large dosage and heterogeneous monotone-increasing patterns for a small dosage", "after": "This paper has been withdrawn by the author", "start_char_pos": 0, "end_char_pos": 1231 } ]
[ 0, 142, 256, 383, 680, 970 ]
1010.2865
1
This paper considers asset price dynamics of which discounted return is modeled by a multi-dimensional affine diffusion process . By analyzing the Riccati system, which is associated with the affine process via the transform formula, we fully characterize the regions of exponents in which asset price moments do not explode at any time or explode at a given time. These behaviors are closely tied to the long-term growth rate of asset price moments as well as implied volatility asymptotics at large-time-to-maturity or at extreme strikes for any given option maturity .
This paper considers multi-dimensional affine processes with continuous sample paths . By analyzing the Riccati system, which is associated with the affine process via the transform formula, we fully characterize the regions of exponents in which exponential moments of a given process do not explode at any time or explode at a given time. In these two cases, we also compute the long-term growth rate and the explosion rate for exponential moments. These results provide a handle to study implied volatility asymptotics in models where return of stock prices are modeled by affine processes whose exponential moments do not have an explicit formula .
[ { "type": "D", "before": "asset price dynamics of which discounted return is modeled by a", "after": null, "start_char_pos": 21, "end_char_pos": 84 }, { "type": "R", "before": "diffusion process", "after": "processes with continuous sample paths", "start_char_pos": 110, "end_char_pos": 127 }, { "type": "R", "before": "asset price moments", "after": "exponential moments of a given process", "start_char_pos": 290, "end_char_pos": 309 }, { "type": "R", "before": "These behaviors are closely tied to", "after": "In these two cases, we also compute", "start_char_pos": 365, "end_char_pos": 400 }, { "type": "R", "before": "of asset price momentsas well as", "after": "and the explosion rate for exponential moments. These results provide a handle to study", "start_char_pos": 427, "end_char_pos": 459 }, { "type": "R", "before": "at large-time-to-maturity or at extreme strikes for any given option maturity", "after": "in models where return of stock prices are modeled by affine processes whose exponential moments do not have an explicit formula", "start_char_pos": 491, "end_char_pos": 568 } ]
[ 0, 129, 364 ]
1010.2865
2
This paper considers multi-dimensional affine processes with continuous sample paths. By analyzing the Riccati system, which is associated with the affine process via the transform formula, we fully characterize the regions of exponents in which exponential moments of a given process do not explode at any time or explode at a given time. In these two cases, we also compute the long-term growth rate and the explosion rate for exponential moments. These results provide a handle to study implied volatility asymptotics in models where return of stock prices are modeled by affine processes whose exponential moments do not have an explicit formula.
This paper considers multi-dimensional affine processes with continuous sample paths. By analyzing the Riccati system, which is associated with the affine process via the transform formula, we fully characterize the regions of exponents in which exponential moments of a given process do not explode at any time or explode at a given time. In these two cases, we also compute the long-term growth rate and the explosion rate for exponential moments. These results provide a handle to study implied volatility asymptotics in models where returns of stock prices are described by affine processes whose exponential moments do not have an explicit formula.
[ { "type": "R", "before": "return", "after": "returns", "start_char_pos": 537, "end_char_pos": 543 }, { "type": "R", "before": "modeled", "after": "described", "start_char_pos": 564, "end_char_pos": 571 } ]
[ 0, 85, 339, 449 ]
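For orientation, the transform formula and the associated generalized Riccati system referred to above take, in one standard notation for the continuous (diffusion) case, the following form; the notation is assumed here, not quoted from the paper. With diffusion matrix a + sum_i x_i alpha_i and drift b + sum_i x_i beta_i,

\[
\mathbb{E}\big[\, e^{\langle u, X_T\rangle} \,\big|\, X_t = x \,\big] = \exp\!\big( \phi(T-t,u) + \langle \psi(T-t,u), x \rangle \big),
\]
\[
\partial_t \psi_i = \tfrac{1}{2}\, \psi^{\top} \alpha_i\, \psi + \langle \beta_i, \psi \rangle, \qquad \psi(0,u) = u,
\]
\[
\partial_t \phi = \tfrac{1}{2}\, \psi^{\top} a\, \psi + \langle b, \psi \rangle, \qquad \phi(0,u) = 0,
\]

and explosion of an exponential moment corresponds to blow-up of this ODE system in finite time.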
1010.2981
1
I apply the method of planar diagrammatic expansion to solve the problem of finding the mean spectral density of the non-Hermitian time-lagged covariance estimator for a system of i.i.d. Gaussian random variables . I confirm the result in a much simpler way using a recent conjecture about non-Hermitian random matrix models with rotationally-symmetric spectra. I conjecture and test numerically a form of finite-size corrections to the mean spectral density featuring the complementary error function .
I apply the method of planar diagrammatic expansion - introduced in a self-consistent way - to solve the problem of finding the mean spectral density of the Hermitian equal-time and non-Hermitian time-lagged cross-covariance estimators, for systems of Gaussian random variables with various underlying covariance functions, both writing the general equations and applying them to several toy models. The models aim at a more and more accurate description of complex financial systems - to which a lengthy introduction is given - albeit only within the Gaussian approximation .
[ { "type": "A", "before": null, "after": "- introduced in a self-consistent way -", "start_char_pos": 52, "end_char_pos": 52 }, { "type": "A", "before": null, "after": "Hermitian equal-time and", "start_char_pos": 118, "end_char_pos": 118 }, { "type": "R", "before": "covariance estimator for a system of i.i.d.", "after": "cross-covariance estimators, for systems of", "start_char_pos": 145, "end_char_pos": 188 }, { "type": "R", "before": ". I confirm the result in a much simpler way using a recent conjecture about non-Hermitian random matrix modelswith rotationally-symmetric spectra. I conjecture and test numerically a form of finite-size corrections to the mean spectral density featuring the complementary error function", "after": "with various underlying covariance functions, both writing the general equations and applying them to several toy models. The models aim at a more and more accurate description of complex financial systems - to which a lengthy introduction is given - albeit only within the Gaussian approximation", "start_char_pos": 215, "end_char_pos": 502 } ]
[ 0, 216, 362 ]
1010.2981
2
I apply the method of planar diagrammatic expansion - introduced in a self-consistent way - to solve the problem of finding the mean spectral density of the Hermitian equal-time and non-Hermitian time-lagged cross-covariance estimators, for systems of Gaussian random variables with various underlying covariance functions, both writing the general equations and applying them to several toy models. The models aim at a more and more accurate description of complex financial systems - to which a lengthy introduction is given - albeit only within the Gaussian approximation .
The random matrix theory method of planar Gaussian diagrammatic expansion is applied to find the mean spectral density of the Hermitian equal-time and non-Hermitian time-lagged cross-covariance estimators, firstly in the form of master equations for the most general multivariate Gaussian system, secondly for seven particular toy models of the true covariance function. For the simplest one of these models, the existing result is shown to be incorrect and the right one is presented; moreover, it is generalized to the exponentially-weighted moving average estimator as well as two non-Gaussian distributions, Student t and free Levy. The paper revolves around applications to financial complex systems, and the results constitute a sensitive probe of the true correlations present there .
[ { "type": "R", "before": "I apply the", "after": "The random matrix theory", "start_char_pos": 0, "end_char_pos": 11 }, { "type": "R", "before": "diagrammatic expansion - introduced in a self-consistent way - to solve the problem of finding the", "after": "Gaussian diagrammatic expansion is applied to find the", "start_char_pos": 29, "end_char_pos": 127 }, { "type": "R", "before": "for systems of Gaussian random variables with various underlying covariance functions, both writing the general equations and applying them to several toy models. The models aim at a more and more accurate description of complex financial systems - to which a lengthy introduction is given - albeit only within the Gaussian approximation", "after": "firstly in the form of master equations for the most general multivariate Gaussian system, secondly for seven particular toy models of the true covariance function. For the simplest one of these models, the existing result is shown to be incorrect and the right one is presented, moreover its generalizations are accomplished to the exponentially-weighted moving average estimator as well as two non-Gaussian distributions, Student t and free Levy. The paper revolves around applications to financial complex systems, and the results constitute a sensitive probe of the true correlations present there", "start_char_pos": 237, "end_char_pos": 574 } ]
[ 0, 399 ]
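The benchmark case behind this line of work, the equal-time covariance estimator for i.i.d. Gaussian returns, is easy to probe numerically; for N and T large with the ratio r = N/T fixed, the eigenvalue density below converges to the Marchenko-Pastur law, the null model against which true correlations are detected. A short Python sketch (names hypothetical):

import numpy as np

def sample_covariance_spectrum(n_assets, t_samples, seed=0):
    # Eigenvalues of the equal-time estimator E = X X^T / T for an
    # N x T matrix X of i.i.d. standard Gaussian "returns".
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_assets, t_samples))
    E = X @ X.T / t_samples
    return np.linalg.eigvalsh(E)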
1010.3763
1
We show that the notion of induction introduced by Cassaigne, Ferenczi and Zamboni for trees of relations arising in the context of interval exchange relations can be generalised to the case of an arbitrary number of possible edge labels. We prove that the equivalence classes of its transitive closure can still be characterised via a circular order on the trees of relations in this case. We compute the cardinalities of these equivalence classes and show that the sequence of cardinalities, for a fixed number of possible edge labels, is a convolution of a Fuss-Catalan sequence. As in the original case, the equivalence classes are in bijection with a set of pseudoknot-free secondary structures arising from the study of RNA; we show that a natural subset of this set is in bijection with a set of m-clusters (in the cluster algebra sense).
We develop and study the structure of combinatorial objects arising from interval exchange transformations. We classify and enumerate these objects and further show that a natural subset of these objects is in natural bijection with a set of m-clusters (in the cluster algebra sense).
[ { "type": "R", "before": "show that the notion of induction introduced by Cassaigne, Ferenczi and Zamboni for trees of relations arising in the context of interval exchange relations can be generalised to the case of an arbitrary number of possible edge labels. We prove that the equivalence classes of its transitive closure can still be characterised via a circular order on the trees of relations in this case. We compute the cardinalities of these equivalence classes and show that the sequence of cardinalities, for a fixed number of possible edge labels, is a convolution of a Fuss-Catalan sequence. As in the original case, the equivalence classes are in bijection with a set of pseudoknot-free secondary structures arising from the study of RNA; we show that a", "after": "develop and study the structure of combinatorial objects arising from interval exchange transformations. We classify and enumerate these objects and further show that a", "start_char_pos": 3, "end_char_pos": 745 }, { "type": "R", "before": "this set is in", "after": "these objects is in natural", "start_char_pos": 764, "end_char_pos": 778 } ]
[ 0, 238, 390, 582, 730 ]
1010.3763
2
We develop and study the structure of combinatorial objects arising from interval exchange transformations . We classify and enumerate these objects and further show that a natural subset of these objects is in natural bijection with a set of m-clusters (in the cluster algebra sense) .
We develop and study the structure of combinatorial objects that are a special case of RNA secondary structures. These are generalisations of objects arising from interval exchange transformations generalising those in the Sturmian context. We represent them as labelled edge-coloured trees. We classify and enumerate them and show that a natural subset is in bijection with a set of m-clusters . Furthermore, we interpret a notion of induction used to model generalised interval exchange transformations as a composition of cluster mutations .
[ { "type": "A", "before": null, "after": "that are a special case of RNA secondary structures. These are generalisations of objects", "start_char_pos": 60, "end_char_pos": 60 }, { "type": "R", "before": ". We", "after": "generalising those in the Sturmian context. We represent them as labelled edge-coloured trees. We", "start_char_pos": 108, "end_char_pos": 112 }, { "type": "R", "before": "these objects and further", "after": "them and", "start_char_pos": 136, "end_char_pos": 161 }, { "type": "R", "before": "of these objects is in natural", "after": "is in", "start_char_pos": 189, "end_char_pos": 219 }, { "type": "R", "before": "(in the cluster algebra sense)", "after": ". Furthermore, we interpret a notion of induction used to model generalised interval exchange transformations as a composition of cluster mutations", "start_char_pos": 255, "end_char_pos": 285 } ]
[ 0, 109 ]
1010.3763
3
We develop and study the structure of combinatorial objects that are a special case of RNA secondary structures. These are generalisations of objects arising from interval exchange transformations generalising those in the Sturmian context . We represent them as labelled edge-coloured trees. We classify and enumerate them and show that a natural subset is in bijection with a set of m-clusters. Furthermore, we interpret a notion of induction used to model generalised interval exchange transformations as a composition of cluster mutations.
We develop and study the structure of combinatorial objects that are a special case of RNA secondary structures. These are generalizations of objects arising from interval exchange transformations in work of J. Cassaigne, S. Ferenczi and L. Q. Zamboni . We represent them as labelled edge-coloured trees. We classify and enumerate them and show that a natural subset is in bijection with a set of m-clusters. Furthermore, we interpret a notion of induction used to model generalised interval exchange transformations as a composition of cluster mutations.
[ { "type": "R", "before": "generalisations", "after": "generalizations", "start_char_pos": 123, "end_char_pos": 138 }, { "type": "R", "before": "generalising those in the Sturmian context", "after": "in work of J. Cassaigne, S. Ferenczi and L. Q. Zamboni", "start_char_pos": 197, "end_char_pos": 239 } ]
[ 0, 112, 241, 292, 396 ]
1010.3763
4
We develop and study the structure of combinatorial objects that are a special case of RNA secondary structures. These are generalizations of objects arising from interval exchange transformations in work of J. Cassaigne, S. Ferenczi and L. Q. Zamboni. We represent them as labelled edge-coloured trees. We classify and enumerate them and show that a natural subset is in bijection with a set of m-clusters. Furthermore, we interpret a notion of induction used to model generalised interval exchange transformations as a composition of cluster mutations .
We study a circular order on labelled, m-edge-coloured trees with k vertices, and show that the set of such trees with a fixed circular order is in bijection with the set of RNA m-diagrams of degree k, combinatorial objects which can be regarded as RNA secondary structures of a certain kind. We enumerate these sets and show that the set of trees with a fixed circular order can be characterized as an equivalence class for the transitive closure of an operation which, in the case m=3, arises as an induction in the context of interval exchange transformations .
[ { "type": "R", "before": "develop and study the structure of combinatorial objects that are a special case of RNA secondary structures. These are generalizations of objects arising from interval exchange transformations in work of J. Cassaigne, S. Ferenczi and L. Q. Zamboni. We represent them as labelled edge-coloured trees. We classify and enumerate them", "after": "study a circular order on labelled, m-edge-coloured trees with k vertices, and show that the set of such trees with a fixed circular order is in bijection with the set of RNA m-diagrams of degree k, combinatorial objects which can be regarded as RNA secondary structures of a certain kind. We enumerate these sets", "start_char_pos": 3, "end_char_pos": 334 }, { "type": "R", "before": "a natural subset is in bijection with a set of m-clusters. Furthermore, we interpret a notion of induction used to model generalised", "after": "the set of trees with a fixed circular order can be characterized as an equivalence class for the transitive closure of an operation which, in the case m=3, arises as an induction in the context of", "start_char_pos": 349, "end_char_pos": 481 }, { "type": "D", "before": "as a composition of cluster mutations", "after": null, "start_char_pos": 516, "end_char_pos": 553 } ]
[ 0, 112, 252, 303, 407 ]
1010.4216
1
We have performed Molecular Dynamics simulations of ectoine, hydroxyectoine and urea in explicit solvent. Special attention has been spent on the characteristics of the local ordering of water molecules around these compatible solutes . Our results indicate that ectoine and hydroxyectoine are able to bind more water molecules than urea on short scales. Furthermore we investigated the number and appearance of hydrogen bonds between the molecules and the solvent. The simulations show that some specific groups in the compatible solutes are able to form a pronounced ordering of the local water structure. Additionally, we have validated that the charging of the molecules is of main importance . Furthermore we show the impact of a locally varying salt concentration . Experimental results are shown which indicate a direct influence of compatible solutes on the liquid expanded-liquid condensed phase transition in DPPC monolayers. We are able to identify a variation of the local water pressure around the compatible solutes by numerical calculations as a possible reason for an experimentally observed broadening of the phase transition .
We have performed Molecular Dynamics simulations of ectoine, hydroxyectoine and urea in explicit solvent. Special attention has been spent on the local surrounding structure of water molecules . Our results indicate that ectoine and hydroxyectoine are able to accumulate more water molecules than urea by a pronounced ordering due to hydrogen bonds. We have validated that the charging of the molecules is of main importance resulting in a well defined hydration sphere. The influence of a varying salt concentration is also investigated. Finally we present experimental results of a DPPC monolayer phase transition that validate our numerical findings .
[ { "type": "R", "before": "characteristics of the local ordering", "after": "local surrounding structure", "start_char_pos": 146, "end_char_pos": 183 }, { "type": "D", "before": "around these compatible solutes", "after": null, "start_char_pos": 203, "end_char_pos": 234 }, { "type": "R", "before": "bind", "after": "accumulate", "start_char_pos": 302, "end_char_pos": 306 }, { "type": "R", "before": "on short scales. Furthermore we investigated the number and appearance of hydrogen bonds between the molecules and the solvent. The simulations show that some specific groups in the compatible solutes are able to form", "after": "by", "start_char_pos": 338, "end_char_pos": 555 }, { "type": "R", "before": "of the local water structure. Additionally, we", "after": "due to hydrogen bonds. We", "start_char_pos": 578, "end_char_pos": 624 }, { "type": "R", "before": ". Furthermore we show the impact of a locally", "after": "resulting in a well defined hydration sphere. The influence of a", "start_char_pos": 697, "end_char_pos": 742 }, { "type": "R", "before": ". Experimental results are shown which indicate a direct influence of compatible solutes on the liquid expanded-liquid condensed phase transition in DPPC monolayers. We are able to identify a variation of the local water pressure around the comaptible solutes by numerical calculations as a possible reason for an experimentally observed broadening of the phase transition", "after": "is also investigated. Finally we present experimental results of a DPPC monolayer phase transition that validate our numerical findings", "start_char_pos": 770, "end_char_pos": 1142 } ]
[ 0, 105, 236, 354, 465, 607, 698, 935 ]
1010.4322
1
In this paper we extend the stability results of [4]. Our utility maximization problem is defined as an essential supremum of conditional expectations of the terminal values of wealth processes, conditioned on the filtration at the stopping time \tau. The stability result , in particular, implies that in the framework of [4], the optimal wealth at any given stopping time is stable with respect to changes in the Sharpe ratio and initial wealth. To establish our results, we extend the classical results of convex analysis to maps from L^0 to L^0. The notion of convex compactness introduced in [7] plays an important role in our analysis.
In this paper we extend the stability results of [4]. Our utility maximization problem is defined as an essential supremum of conditional expectations of the terminal values of wealth processes, conditioned on the filtration at the stopping time \tau. As a corollary, the stability result implies that in the framework of [MR2438002], the optimal wealth at any given stopping time is stable with respect to changes in the Sharpe ratio and initial wealth. To establish our results, we extend the classical results of convex analysis to maps from L^0 to L^0. The notion of convex compactness introduced in [7] plays an important role in our analysis.
[ { "type": "R", "before": "The stability result , in particular,", "after": "As a corollary, the stability result", "start_char_pos": 253, "end_char_pos": 290 }, { "type": "R", "before": "4", "after": "\\mbox{%DIFAUXCMD MR2438002", "start_char_pos": 324, "end_char_pos": 325 } ]
[ 0, 54, 252, 447, 549 ]
1010.4384
1
We model the dynamics of asset prices and associated derivatives by consideration of the dynamics of the conditional probability density process for the value of an asset at some specified time in the future. In the case where the asset is driven by Brownian motion, an associated "master equation" for the dynamics of the conditional probability density is derived and expressed in integral form. By a "model" for the conditional density process we mean a solution to the master equation along with the specification of (a) the initial density, and (b) the volatility structure of the density. The volatility structure is assumed at any time and for each value of the argument of the density to be a functional of the history of the density up to that time . This functional determines the model for the conditional density . In practice one specifies the functional modulo sufficient parametric freedom to allow for the input of additional option data apart from that implicit in the initial density. The scheme is sufficiently flexible to allow for the input of various types of data depending on the nature of the options market and the class of valuation problem being undertaken. Various examples are studied in detail, with exact solutions provided in some cases.
We model the dynamics of asset prices and associated derivatives by consideration of the dynamics of the conditional probability density process for the value of an asset at some specified time in the future. In the case where the price process is driven by Brownian motion, an associated "master equation" for the dynamics of the conditional probability density is derived and expressed in integral form. By a "model" for the conditional density process we mean a solution to the master equation along with the specification of (a) the initial density, and (b) the volatility structure of the density. The volatility structure is assumed at any time and for each value of the argument of the density to be a functional of the history of the density up to that time . In practice one specifies the functional modulo sufficient parametric freedom to allow for the input of additional option data apart from that implicit in the initial density. The scheme is sufficiently flexible to allow for the input of various types of data depending on the nature of the options market and the class of valuation problem being undertaken. Various examples are studied in detail, with exact solutions provided in some cases.
[ { "type": "R", "before": "asset", "after": "price process", "start_char_pos": 231, "end_char_pos": 236 }, { "type": "D", "before": ". This functional determines the model for the conditional density", "after": null, "start_char_pos": 758, "end_char_pos": 824 } ]
[ 0, 208, 397, 594, 759, 826, 1002, 1185 ]
1010.4735
1
Nested sampling is a technique developed to explore probability distributions localised in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques including parallel tempering. In this paper we apply the nested sampling algorithm to the problem of protein folding in a force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. The topology of the protein molecule emerges as a major determinant of the shape of the energy landscape . The algorithm was tested with protein G. The tertiary structure of the protein was predicted to a reasonably high level of accuracy. The best conformation found had an RMSD of 3.12A from the native structure. Two other proteins, chymotrypsin inhibitor 2 and Src tyrosine kinase SH3 domain, were also tested, with the best conformations found having RMSD 4.75A and 5.33A from their respective native structures . The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post-processing of the output.
Nested sampling is a technique developed to explore probability distributions localised in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques , including parallel tempering. In this paper we apply the nested sampling algorithm to the problem of protein folding in a Go-type force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. We demonstrate the method by conducting folding simulations on a number of small proteins which are commonly used for testing protein folding procedures: protein G, the SH3 domain of Src tyrosine kinase and chymotrypsin inhibitor 2. We compare our results for protein G to those obtained using parallel tempering with the same model. The topology of the protein molecule emerges as a major determinant of the shape of the energy landscape . The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post-processing of the output.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 363, "end_char_pos": 363 }, { "type": "A", "before": null, "after": "Go-type", "start_char_pos": 486, "end_char_pos": 486 }, { "type": "A", "before": null, "after": "We demonstrate the method by conducting folding simulations on a number of small proteins which are commonly used for testing protein folding procedures: protein G, the SH3 domain of Src tyrosine kinase and chymotrypsin inhibitor 2. We compare our results for protein G to those obtained using parallel tempering with the same model.", "start_char_pos": 941, "end_char_pos": 941 }, { "type": "D", "before": ". The algorithm was tested with protein G. The tertiary structure of the protein was predicted to a reasonably high level of accuracy. The best conformation found had an RMSD of 3.12A from the native structure. Two other proteins, chymotrypsin inhibitor 2 and Src tyrosine kinase SH3 domain, were also tested, with the best conformations found having RMSD 4.75A and 5.33A from their respective native structures", "after": null, "start_char_pos": 1047, "end_char_pos": 1458 } ]
[ 0, 142, 256, 393, 616, 818, 940, 1048, 1181, 1257, 1460 ]
1010.4735
2
Nested sampling is a technique developed to explore probability distributions localised in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering . In this paper we apply the nested sampling algorithm to the problem of protein folding in a Go-type force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used . We demonstrate the method by conducting folding simulations on a number of small proteins which are commonly used for testing protein folding procedures: protein G, the SH3 domain of Src tyrosine kinase and chymotrypsin inhibitor 2. We compare our results for protein G to those obtained using parallel tempering with the same model. The topology of the protein molecule emerges as a major determinant of the shape of the energy landscape. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post-processing of the output .
Nested sampling is a Bayesian sampling technique developed to explore probability distributions localised in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post-processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering (replica exchange) . In this paper we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Go-type force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins which are commonly used for testing protein folding procedures: protein G, the SH3 domain of Src tyrosine kinase and chymotrypsin inhibitor 2. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used .
[ { "type": "A", "before": null, "after": "Bayesian sampling", "start_char_pos": 21, "end_char_pos": 21 }, { "type": "R", "before": "localised", "after": "lo- calised", "start_char_pos": 79, "end_char_pos": 88 }, { "type": "A", "before": null, "after": "The nested sampling algo- rithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post-processing of the output.", "start_char_pos": 258, "end_char_pos": 258 }, { "type": "A", "before": null, "after": "(replica exchange)", "start_char_pos": 395, "end_char_pos": 395 }, { "type": "R", "before": "apply", "after": "describe a parallel implementation of", "start_char_pos": 415, "end_char_pos": 420 }, { "type": "A", "before": null, "after": "and its application", "start_char_pos": 451, "end_char_pos": 451 }, { "type": "A", "before": null, "after": "We demonstrate the method by conducting folding simulations on a number of small proteins which are commonly used for testing protein folding procedures: protein G, the SH3 domain of Src tyrosine kinase and chymotrypsin inhibitor 2.", "start_char_pos": 629, "end_char_pos": 629 }, { "type": "D", "before": ". We demonstrate the method by conducting folding simulations on a number of small proteins which are commonly used for testing protein folding procedures: protein G, the SH3 domain of Src tyrosine kinase and chymotrypsin inhibitor 2. We compare our results for protein G to those obtained using parallel tempering with the same model. The topology of the protein molecule emerges as a major determinant of the shape of the energy landscape. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post-processing of the output", "after": null, "start_char_pos": 953, "end_char_pos": 1604 } ]
[ 0, 143, 257, 397, 628, 831, 954, 1187, 1288, 1394 ]
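For concreteness, here is a toy serial nested-sampling loop in Python. This is Skilling's basic scheme, not the parallel implementation described above; the constrained-prior draw is done by naive rejection, which is only viable for easy problems, and all names are hypothetical.

import math
import random

def logaddexp(a, b):
    # log(exp(a) + exp(b)) computed without overflow.
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_sampling(log_likelihood, prior_sample, n_live=100, n_iter=1000, seed=0):
    # Discard the worst live point, credit it with the geometrically
    # shrinking prior-volume shell X_i = exp(-i / n_live), and accumulate
    # the evidence Z = sum_i L_i (X_{i-1} - X_i) in log space.
    rng = random.Random(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    logL = [log_likelihood(p) for p in live]
    logZ, logX = -math.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda k: logL[k])
        logX_new = -i / n_live
        log_shell = logX + math.log(1.0 - math.exp(logX_new - logX))
        logZ = logaddexp(logZ, logL[worst] + log_shell)
        threshold = logL[worst]
        while True:  # naive rejection from the prior above the threshold
            cand = prior_sample(rng)
            cand_logL = log_likelihood(cand)
            if cand_logL > threshold:
                break
        live[worst], logL[worst] = cand, cand_logL
        logX = logX_new
    # The leftover contribution of the final live points is neglected here.
    return logZ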
1010.4920
1
We study the problem of channel pairing and power allocation in a multi-channel , multi-hop relay network to enhance the end-to-end data rate. OFDM-based relays are used as an illustrative example, and the amplify-and-forward and decode-and-forward relaying strategies are considered. Given fixed power allocation to the OFDM subcarriers, we observe that a sorted-SNR subcarrier pairing strategy is data-rate optimal, where each relay pairs its incoming and outgoing subcarriers by their SNR order. For the joint optimization of subcarrier pairing and power allocation , we show that it is optimal to separately consider the two subproblems, for both individual and total power constraints . This separation principle significantly reduces the computational complexity in finding the jointly optimal solution. We further establish the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution , which allows simple implementation of optimal subcarrier pairing at the relays. Simulation results are presented to demonstrate the performance gain of the jointly optimal solution over some suboptimal alternatives .
We study the problem of channel pairing and power allocation in a multi-channel multi-hop relay network to enhance the end-to-end data rate. Both amplify-and-forward and decode-and-forward relaying strategies are considered. Given fixed power allocation to the channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and a sorted-SNR channel pairing strategy is sum-rate optimal, where each relay pairs its incoming and outgoing channels by their SNR order. For the joint optimization of channel pairing and power allocation under both total and individual power constraints , we show that the problem can be decomposed into two separate subproblems solved independently . This separation principle is established by observing the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution . It significantly reduces the computational complexity in finding the jointly optimal solution. The solution for optimizing power allocation is also provided. Numerical results are provided to demonstrate substantial performance gain of the jointly optimal solution over some suboptimal alternatives . It is also observed that more gain is obtained from optimal channel pairing than optimal power allocation through judiciously exploiting the variation among multiple channels. Impact of the variation of channel gain, the number of channels, and the number of hops on the performance gain is also studied through numerical examples .
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 80, "end_char_pos": 81 }, { "type": "R", "before": "OFDM-based relays are used as an illustrative example, and the", "after": "Both", "start_char_pos": 143, "end_char_pos": 205 }, { "type": "R", "before": "OFDM subcarriers, we observe that", "after": "channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and", "start_char_pos": 321, "end_char_pos": 354 }, { "type": "R", "before": "subcarrier", "after": "channel", "start_char_pos": 368, "end_char_pos": 378 }, { "type": "R", "before": "data-rate", "after": "sum-rate", "start_char_pos": 399, "end_char_pos": 408 }, { "type": "R", "before": "subcarriers", "after": "channels", "start_char_pos": 467, "end_char_pos": 478 }, { "type": "R", "before": "subcarrier", "after": "channel", "start_char_pos": 529, "end_char_pos": 539 }, { "type": "A", "before": null, "after": "under both total and individual power constraints", "start_char_pos": 569, "end_char_pos": 569 }, { "type": "R", "before": "it is optimal to separately consider the two subproblems, for both individual and total power constraints", "after": "the problem can be decomposed into two separate subproblems solved independently", "start_char_pos": 585, "end_char_pos": 690 }, { "type": "R", "before": "significantly reduces the computational complexity in finding the jointly optimal solution. We further establish the", "after": "is established by observing the", "start_char_pos": 719, "end_char_pos": 835 }, { "type": "R", "before": ", which allows simple implementation of optimal subcarrier pairing at the relays. Simulation results are presented to demonstrate the", "after": ". It significantly reduces the computational complexity in finding the jointly optimal solution. The solution for optimizing power allocation is also provided. Numerical results are provided to demonstrate substantial", "start_char_pos": 927, "end_char_pos": 1060 }, { "type": "A", "before": null, "after": ". It is also observed that more gain is obtained from optimal channel pairing than optimal power allocation through judiciously exploiting the variation among multiple channels. Impact of the variation of channel gain, the number of channels, and the number of hops on the performance gain is also studied through numerical examples", "start_char_pos": 1144, "end_char_pos": 1144 } ]
[ 0, 142, 284, 498, 692, 810, 1008 ]
1010.4920
2
We study the problem of channel pairing and power allocation in a multi-channel multi-hop relay network to enhance the end-to-end data rate. Both amplify-and-forward and decode-and-forward relaying strategies are considered. Given fixed power allocation to the channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and a sorted-SNR channel pairing strategy is sum-rate optimal, where each relay pairs its incoming and outgoing channels by their SNR order. For the joint optimization of channel pairing and power allocation under both total and individual power constraints, we show that the problem can be decomposed into two separate subproblems solved independently . This separation principle is established by observing the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution. It significantly reduces the computational complexity in finding the jointly optimal solution. The solution for optimizing power allocation is also provided . Numerical results are provided to demonstrate substantial performance gain of the jointly optimal solution over some suboptimal alternatives. It is also observed that more gain is obtained from optimal channel pairing than optimal power allocation through judiciously exploiting the variation among multiple channels. Impact of the variation of channel gain, the number of channels, and the number of hops on the performance gain is also studied through numerical examples.
We study the problem of channel pairing and power allocation in a multichannel multihop relay network to enhance the end-to-end data rate. Both amplify-and-forward (AF) and decode-and-forward (DF) relaying strategies are considered. Given fixed power allocation to the channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and a sorted-SNR channel pairing strategy is sum-rate optimal, where each relay pairs its incoming and outgoing channels by their SNR order. For the joint optimization of channel pairing and power allocation under both total and individual power constraints, we show that the problem can be decoupled into two subproblems solved separately . This separation principle is established by observing the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution. It significantly reduces the computational complexity in finding the jointly optimal solution. It follows that the channel pairing problem in joint optimization can be again decomposed into independent pairing problems at each relay based on sorted channel gains. The solution for optimizing power allocation for DF relaying is also provided , as well as an asymptotically optimal solution for AF relaying . Numerical results are provided to demonstrate substantial performance gain of the jointly optimal solution over some suboptimal alternatives. It is also observed that more gain is obtained from optimal channel pairing than optimal power allocation through judiciously exploiting the variation among multiple channels. Impact of the variation of channel gain, the number of channels, and the number of hops on the performance gain is also studied through numerical examples.
[ { "type": "R", "before": "multi-channel multi-hop", "after": "multichannel multihop", "start_char_pos": 66, "end_char_pos": 89 }, { "type": "A", "before": null, "after": "(AF)", "start_char_pos": 166, "end_char_pos": 166 }, { "type": "A", "before": null, "after": "(DF)", "start_char_pos": 190, "end_char_pos": 190 }, { "type": "R", "before": "decomposed into two separate subproblems solved independently", "after": "decoupled into two subproblems solved separately", "start_char_pos": 679, "end_char_pos": 740 }, { "type": "A", "before": null, "after": "It follows that the channel pairing problem in joint optimization can be again decomposed into independent pairing problems at each relay based on sorted channel gains.", "start_char_pos": 988, "end_char_pos": 988 }, { "type": "A", "before": null, "after": "for DF relaying", "start_char_pos": 1034, "end_char_pos": 1034 }, { "type": "A", "before": null, "after": ", as well as an asymptotically optimal solution for AF relaying", "start_char_pos": 1052, "end_char_pos": 1052 } ]
[ 0, 140, 226, 528, 742, 892, 987, 1196, 1372 ]
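The sorted-SNR (equivalently, sorted channel gain) pairing rule established above amounts to a sort-and-zip at each relay. A direct Python rendering, with hypothetical names:

def sorted_snr_pairing(incoming_snr, outgoing_snr):
    # Pair channels by SNR rank: best incoming with best outgoing,
    # second best with second best, and so on.
    in_rank = sorted(range(len(incoming_snr)), key=lambda i: incoming_snr[i], reverse=True)
    out_rank = sorted(range(len(outgoing_snr)), key=lambda j: outgoing_snr[j], reverse=True)
    return list(zip(in_rank, out_rank))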
1010.5154
1
Our study shows that many firms would accumulate at zero output level (namely, Bankruptcy status) if a competitive market reaches full employment (namely, those people who should obtain employment have obtained employment). As a result, appearance of economic crisis is determined by two points; that is, (a). Stock market approaches perfect competition; (b). Society reaches full employment. The empirical research of these two points would lead to early warning of economic crisis .
Our study shows that many firms would accumulate at zero output level (namely, Bankruptcy status) if a perfectly competitive market reaches full employment (namely, those people who should obtain employment have obtained employment). As a result, appearance of economic crisis is determined by two points; that is, (a). Stock market approaches perfect competition; (b). Society reaches full employment. The empirical research of these two points would lead to early warning of economic crisis . Moreover, it is a surprise that the state of economic crisis would be a feasible equilibrium within the framework of the Arrow-Debreu model. That means that we can not understand the origin of economic crisis within the framework of modern economics, for example, the general equilibrium theory .
[ { "type": "A", "before": null, "after": "perfectly", "start_char_pos": 103, "end_char_pos": 103 }, { "type": "A", "before": null, "after": ". Moreover, it is a surprise that the state of economic crisis would be a feasible equilibrium within the framework of the Arrow-Debreu model. That means that we can not understand the origin of economic crisis within the framework of modern economics, for example, the general equilibrium theory", "start_char_pos": 484, "end_char_pos": 484 } ]
[ 0, 224, 296, 310, 355, 393 ]
1010.5808
1
The paper is concerned with the problem of existence of solutions for the Heath-Jarrow-Morton equation with linear volatility. Necessary conditions and sufficient conditions for the existence of semigroup solutions and strong solutions are provided. It is shown that the key role is played by the logarithmic growth conditions of the Laplace exponent.
The paper is concerned with the problem of existence of solutions for the Heath-Jarrow-Morton equation with linear volatility. Necessary conditions and sufficient conditions for the existence of weak solutions and strong solutions are provided. It is shown that the key role is played by the logarithmic growth conditions of the Laplace exponent.
[ { "type": "R", "before": "semigroup", "after": "weak", "start_char_pos": 195, "end_char_pos": 204 } ]
[ 0, 126, 249 ]
1011.0765
1
Numerous experiments demonstrate a high level of promiscuity and structural disorder in organismal proteomes. Here we ask the question what makes a protein promiscuous and structurally disordered. We predict that multi-scale correlations of amino acid positions within protein sequences statistically enhance the propensity for promiscuous intra- and inter-protein binding. We show that sequence correlations between amino acids of the same type are statistically enhanced in structurally disordered proteins and in hubs of organismal proteomes. We also show that structurally disordered proteins possess a significantly higher degree of sequence order than structurally ordered proteins. We develop an analytical theory for this effect and predict the robustness of our conclusions with respect to the amino acid composition and the form of the microscopic potential between the interacting sequences. Our findings have implications for understanding molecular mechanisms of protein aggregation diseases induced by the extension of sequence repeats.
Numerous experiments demonstrate a high level of promiscuity and structural disorder in organismal proteomes. Here we ask the question what makes a protein promiscuous , i.e., prone to non-specific interactions, and structurally disordered. We predict that multi-scale correlations of amino acid positions within protein sequences statistically enhance the propensity for promiscuous intra- and inter-protein binding. We show that sequence correlations between amino acids of the same type are statistically enhanced in structurally disordered proteins and in hubs of organismal proteomes. We also show that structurally disordered proteins possess a significantly higher degree of sequence order than structurally ordered proteins. We develop an analytical theory for this effect and predict the robustness of our conclusions with respect to the amino acid composition and the form of the microscopic potential between the interacting sequences. Our findings have implications for understanding molecular mechanisms of protein aggregation diseases induced by the extension of sequence repeats.
[ { "type": "A", "before": null, "after": ", i.e., prone to non-specific interactions,", "start_char_pos": 165, "end_char_pos": 165 } ]
[ 0, 106, 194, 371, 540, 683, 897 ]
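For the record above (arXiv 1011.0765), a minimal sketch of what "sequence correlations between amino acids of the same type" can mean operationally. Everything here is our own illustration, not the authors' code: the function names, the shuffled-baseline normalization, and the toy sequence are all assumptions.

```python
# Hedged sketch: count same-type residue pairs at each separation d and
# normalize by a shuffled baseline; ratios > 1 at small d indicate the kind
# of same-type correlation enhancement the abstract describes.
import random
from collections import Counter

def same_type_pair_counts(seq, max_d=20):
    """Number of pairs (i, i+d) with identical residues, for d = 1..max_d."""
    counts = Counter()
    for d in range(1, max_d + 1):
        counts[d] = sum(seq[i] == seq[i + d] for i in range(len(seq) - d))
    return counts

def correlation_enhancement(seq, max_d=20, n_shuffles=200, seed=0):
    """Observed same-type pair counts divided by their mean over shuffles."""
    rng = random.Random(seed)
    observed = same_type_pair_counts(seq, max_d)
    baseline = Counter()
    letters = list(seq)
    for _ in range(n_shuffles):
        rng.shuffle(letters)
        for d, c in same_type_pair_counts("".join(letters), max_d).items():
            baseline[d] += c / n_shuffles
    return {d: observed[d] / baseline[d] for d in observed if baseline[d] > 0}

if __name__ == "__main__":
    # Made-up low-complexity sequence, purely for illustration.
    print(correlation_enhancement("MKQQQQLLPKSAAAAGHQQQKL", max_d=5))
```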
1011.1234
1
The mathematical problem of the static storage optimization is formulated and solved by means of a variational analysis. The solution obtained in implicit form is shedding light on the most important features of the optimal exercise strategy. We show how the solution depends on different constraint types including carry cost and cycling constraint. We investigate the relation between intrinsic and stochastic solutions. In particular we give another proof that the stochastic problem has a "bang-bang" optimal exercise strategy. We also show why the optimal stochastic exercise decision is always close to the intrinsic one. In the second half we develop a perturbation analysis to solve the stochastic optimization problem. The obtained approximate solution allows us to estimate the time value , which arises due to the stochastic nature of the price process . In particular we find an answer to rather academic question of asymptotic time value for the mean reversion parameter approaching zero or infinity. We also investigate the differences between swing and storage problems. The analytical results are compared with numerical valuations and found to be in a good agreement.
The mathematical problem of the static storage optimisation is formulated and solved by means of a variational analysis. The solution obtained in implicit form is shedding light on the most important features of the optimal exercise strategy. We show how the solution depends on different constraint types including carry cost and cycling constraint. We investigate the relation between intrinsic and stochastic solutions. In particular we give another proof that the stochastic problem has a "bang-bang" optimal exercise strategy. We also show why the optimal stochastic exercise decision is always close to the intrinsic one. In the second half we develop a perturbation analysis to solve the stochastic optimisation problem. The obtained approximate solution allows us to estimate the time value of the storage option . In particular we find an answer to rather academic question of asymptotic time value for the mean reversion parameter approaching zero or infinity. We also investigate the differences between swing and storage problems. The analytical results are compared with numerical valuations and found to be in a good agreement.
[ { "type": "R", "before": "optimization", "after": "optimisation", "start_char_pos": 47, "end_char_pos": 59 }, { "type": "R", "before": "optimization", "after": "optimisation", "start_char_pos": 706, "end_char_pos": 718 }, { "type": "R", "before": ", which arises due to the stochastic nature of the price process", "after": "of the storage option", "start_char_pos": 799, "end_char_pos": 863 } ]
[ 0, 120, 242, 350, 422, 531, 627, 727, 865, 1013, 1085 ]
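For the record above (arXiv 1011.1234), a minimal sketch of the intrinsic problem and of why the optimal decision is "bang-bang": backward dynamic programming over inventory levels on a deterministic forward curve. The discretization, the capacity and rate numbers, and charging the carry cost per injected unit are all our assumptions, not the paper's formulation.

```python
# Hedged sketch: intrinsic storage value by backward induction. At each step
# the best action is one of three extremes -- inject at full rate, withdraw
# at full rate, or idle -- which is the bang-bang structure.
def intrinsic_storage_value(forwards, v_max=10, rate=1, carry_cost=0.0):
    """Value of storage that starts and must end empty (assumed terminal condition)."""
    value = [0.0 if v == 0 else float("-inf") for v in range(v_max + 1)]
    for price in reversed(forwards):
        new = []
        for v in range(v_max + 1):
            best = value[v]                                    # idle
            if v + rate <= v_max:                              # inject: buy now
                best = max(best, value[v + rate] - price * rate - carry_cost)
            if v - rate >= 0:                                  # withdraw: sell now
                best = max(best, value[v - rate] + price * rate)
            new.append(best)
        value = new
    return value[0]

print(intrinsic_storage_value([3, 1, 4, 1, 5, 9, 2, 6]))  # toy forward curve
```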
1011.2313
1
Information about primary user (PU) location is crucial in enabling several key capabilities in dynamic spectrum access networks, including improved spatio-temporal sensing, intelligent location-aware routing, as well as aiding spectrum policy enforcement. Compared to other proposed non-interactive localization algorithms, the weighted centroid localization (WCL) scheme uses only received signal strength information, which makes it simple and robust to variations in the propagation environment. In contrast to prior work, which focused mainly on proposing algorithmic variations and verifying their performance through simulations, in this paper we present the first theoretical framework for WCL performance analysis in terms of its localization error distribution parameterized by node density, node placement, shadowing variance and correlation distance. Using this analysis, we quantify the robustness of WCL to various physical conditions and provide guidelines, such as node placement, for practical deployment of WCL. We also propose a practical method for employing WCL through a distributed cluster-based implementation. This approach achieves comparable accuracy with its centralized counterpart, and greatly reduces transmit power consumption .
Information about primary transmitter location is crucial in enabling several key capabilities in dynamic spectrum access networks, including improved spatio-temporal sensing, intelligent location-aware routing, as well as aiding spectrum policy enforcement. Compared to other proposed non-interactive localization algorithms, the weighted centroid localization (WCL) scheme uses only the received signal strength information, which makes it simple to implement and robust to variations in the propagation environment. In this paper we present the first theoretical framework for WCL performance analysis in terms of its localization error distribution parameterized by node density, node placement, shadowing variance and correlation distance. Using this analysis, we quantify the robustness of WCL to various physical conditions and provide design guidelines, such as node placement, for the practical deployment of WCL. We also propose a practical method for employing WCL through a distributed cluster-based implementation. This approach achieves comparable accuracy with its centralized counterpart, and greatly reduces total transmit power .
[ { "type": "R", "before": "user (PU)", "after": "transmitter", "start_char_pos": 26, "end_char_pos": 35 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 383, "end_char_pos": 383 }, { "type": "A", "before": null, "after": "to implement", "start_char_pos": 444, "end_char_pos": 444 }, { "type": "D", "before": "contrast to prior work, which focused mainly on proposing algorithmic variations and verifying their performance through simulations, in", "after": null, "start_char_pos": 505, "end_char_pos": 641 }, { "type": "A", "before": null, "after": "design", "start_char_pos": 963, "end_char_pos": 963 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1004, "end_char_pos": 1004 }, { "type": "R", "before": "transmit power consumption", "after": "total transmit power", "start_char_pos": 1236, "end_char_pos": 1262 } ]
[ 0, 256, 501, 864, 1033, 1138 ]
1011.2313
2
Information about primary transmitter location is crucial in enabling several key capabilities in dynamic spectrum access networks, including improved spatio-temporal sensing, intelligent location-aware routing, as well as aiding spectrum policy enforcement. Compared to other proposed non-interactive localization algorithms, the weighted centroid localization (WCL) scheme uses only the received signal strength information, which makes it simple to implement and robust to variations in the propagation environment. In this paper we present the first theoretical framework for WCL performance analysis in terms of its localization error distribution parameterized by node density, node placement, shadowing variance and correlation distance . Using this analysis, we quantify the robustness of WCL to various physical conditions and provide design guidelines, such as node placement , for the practical deployment of WCL. We also propose a practical method for employing WCL through a distributed cluster-based implementation. This approach achieves comparable accuracy with its centralized counterpart , and greatly reduces total transmit power .
Information about primary transmitter location is crucial in enabling several key capabilities in cognitive radio networks, including improved spatio-temporal sensing, intelligent location-aware routing, as well as aiding spectrum policy enforcement. Compared to other proposed non-interactive localization algorithms, the weighted centroid localization (WCL) scheme uses only the received signal strength information, which makes it simple to implement and robust to variations in the propagation environment. In this paper we present the first theoretical framework for WCL performance analysis in terms of its localization error distribution parameterized by node density, node placement, shadowing variance , correlation distance and inaccuracy of sensor node positioning . Using this analysis, we quantify the robustness of WCL to various physical conditions and provide design guidelines, such as node placement and spacing , for the practical deployment of WCL. We also propose a power-efficient method for implementing WCL through a distributed cluster-based algorithm, that achieves comparable accuracy with its centralized counterpart .
[ { "type": "R", "before": "dynamic spectrum access", "after": "cognitive radio", "start_char_pos": 98, "end_char_pos": 121 }, { "type": "R", "before": "and correlation distance", "after": ", correlation distance and inaccuracy of sensor node positioning", "start_char_pos": 719, "end_char_pos": 743 }, { "type": "A", "before": null, "after": "and spacing", "start_char_pos": 886, "end_char_pos": 886 }, { "type": "R", "before": "practical method for employing", "after": "power-efficient method for implementing", "start_char_pos": 944, "end_char_pos": 974 }, { "type": "R", "before": "implementation. This approach", "after": "algorithm, that", "start_char_pos": 1015, "end_char_pos": 1044 }, { "type": "D", "before": ", and greatly reduces total transmit power", "after": null, "start_char_pos": 1107, "end_char_pos": 1149 } ]
[ 0, 258, 518, 925, 1030 ]
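The two records above (arXiv 1011.2313, revisions 1 and 2) analyze weighted centroid localization; the scheme itself fits in a few lines. This is a hedged sketch: the dBm-to-linear weight conversion and the example coordinates are our assumptions, and the papers' contribution is the error analysis, not this snippet.

```python
# Hedged sketch of plain WCL: the transmitter estimate is the RSS-weighted
# average of the known sensor positions.
def wcl_estimate(positions, rss_dbm):
    """positions: list of (x, y); rss_dbm: received signal strengths in dBm."""
    weights = [10 ** (p / 10.0) for p in rss_dbm]      # dBm -> linear mW
    total = sum(weights)
    x = sum(w * px for w, (px, _) in zip(weights, positions)) / total
    y = sum(w * py for w, (_, py) in zip(weights, positions)) / total
    return x, y

# Toy example: three sensors, strongest reading near the origin.
print(wcl_estimate([(0, 0), (100, 0), (0, 100)], [-60, -70, -75]))
```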
1011.2827
1
This paper investigates the impact of parameter uncertainty on capital estimate in the well-known extended Loss Given Default (LGD) model with systematic dependence between default and recovery . We demonstrate how the uncertainty can be quantified using the full posterior distribution of model parameters obtained from Bayesian inference via Markov chain Monte Carlo (MCMC). Results show that the parameter uncertainty and its impact on capital can be very significant. We have also quantified the effect of diversification for a finite number of borrowers in comparison with the infinitely granular portfolio .
It is a well known fact that recovery rates tend to go down when the number of defaults goes up in economic downturns. We demonstrate how the loss given default model with the default and recovery dependent via the latent systematic risk factor can be estimated using Bayesian inference methodology and Markov chain Monte Carlo method. This approach is very convenient for joint estimation of all model parameters and latent systematic factors. Moreover, all relevant uncertainties are easily quantified. Typically available data are annual averages of defaults and recoveries and thus the datasets are small and parameter uncertainty is significant. In this case Bayesian approach is superior to the maximum likelihood method that relies on a large sample limit Gaussian approximation for the parameter uncertainty. As an example, we consider a homogeneous portfolio with one latent factor. However, the approach can be easily extended to deal with non-homogenous portfolios and several latent factors .
[ { "type": "R", "before": "This paper investigates the impact of parameter uncertainty on capital estimate in", "after": "It is a well known fact that recovery rates tend to go down when the number of defaults goes up in economic downturns. We demonstrate how the loss given default model with", "start_char_pos": 0, "end_char_pos": 82 }, { "type": "D", "before": "well-known extended Loss Given Default (LGD) model with systematic dependence between", "after": null, "start_char_pos": 87, "end_char_pos": 172 }, { "type": "R", "before": ". We demonstrate how the uncertainty can be quantified using the full posterior distribution of model parameters obtained from Bayesian inference via", "after": "dependent via the latent systematic risk factor can be estimated using Bayesian inference methodology and", "start_char_pos": 194, "end_char_pos": 343 }, { "type": "R", "before": "(MCMC). Results show that the parameter uncertainty and its impact on capital can be very significant. We have also quantified the effect of diversification for a finite number of borrowers in comparison with the infinitely granular portfolio", "after": "method. This approach is very convenient for joint estimation of all model parameters and latent systematic factors. Moreover, all relevant uncertainties are easily quantified. Typically available data are annual averages of defaults and recoveries and thus the datasets are small and parameter uncertainty is significant. In this case Bayesian approach is superior to the maximum likelihood method that relies on a large sample limit Gaussian approximation for the parameter uncertainty. As an example, we consider a homogeneous portfolio with one latent factor. However, the approach can be easily extended to deal with non-homogenous portfolios and several latent factors", "start_char_pos": 369, "end_char_pos": 611 } ]
[ 0, 195, 376, 471 ]
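For the record above (arXiv 1011.2827), a toy sketch of the kind of Bayesian MCMC estimation the abstract describes, restricted to the default side of a one-factor Vasicek-type model (a recovery equation sharing the latent factor would be added analogously). The priors, the crude all-at-once random-walk proposal, and the made-up data are our assumptions; a serious implementation would update parameters and latent factors blockwise.

```python
# Hedged toy sketch: random-walk Metropolis jointly sampling a default
# threshold c, factor loading rho, and one latent systematic factor per year,
# given annual (defaults, obligors) data. Conditional on factor x,
# PD(x) = Phi((c - sqrt(rho) x) / sqrt(1 - rho)).
import math, random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_post(c, rho, factors, data):
    """Log posterior up to a constant: flat prior on c, Uniform(0,1) on rho,
    N(0,1) latent factors, binomial likelihood (constant term dropped)."""
    if not 0.0 < rho < 1.0:
        return float("-inf")
    lp = sum(-0.5 * x * x for x in factors)
    for x, (k, n) in zip(factors, data):
        pd = norm_cdf((c - math.sqrt(rho) * x) / math.sqrt(1.0 - rho))
        pd = min(max(pd, 1e-12), 1.0 - 1e-12)
        lp += k * math.log(pd) + (n - k) * math.log(1.0 - pd)
    return lp

def mcmc(data, n_iter=20000, step=0.05, seed=1):
    rng = random.Random(seed)
    c, rho, factors = -2.0, 0.2, [0.0] * len(data)
    cur, draws = log_post(c, rho, factors, data), []
    for _ in range(n_iter):
        pc = c + rng.gauss(0, step)
        pr = rho + rng.gauss(0, step)
        pf = [x + rng.gauss(0, step) for x in factors]
        new = log_post(pc, pr, pf, data)
        if math.log(rng.random() + 1e-300) < new - cur:    # symmetric proposal
            c, rho, factors, cur = pc, pr, pf, new
        draws.append((c, rho))
    return draws

data = [(3, 200), (1, 200), (9, 200), (2, 200), (15, 200)]  # made-up years
draws = mcmc(data)[5000:]                                   # drop burn-in
print("posterior mean rho:", sum(r for _, r in draws) / len(draws))
```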
1011.3181
1
A comparative classification scheme provides a good basis for several approaches to understand proteins, including prediction of relations between their structure and biological function. However, it remains a challenge to combine a classification scheme that describes a protein starting from its secondary structures and often involves direct human involvement, with an atomary level Physics based approach where a protein is fundamentally viewed as not much more than an ensemble of mutually interacting carbon, hydrogen, oxygen and nitrogen atoms. It appears that in order to bridge these two complementary approaches to proteins, conceptually novel tools need to be introduced. Here we explain how the geometrical shape of secondary superstructures such as helix-loop-helix motifs, and even entire folded proteins , can be described analytically in terms of a single explicit elementary function that is familiar from nonlinear physical systems where it is known as the kink-soliton. Our approach enables the conversion of hierarchical structural information into a quantitative form that allows for a folded protein to be characterized in terms of a small number of global parameters that are in principle computable from atomary level considerations. As an example of the feasibility of our approach we describe in detail how the native fold of the myoglobin 1M6C emerges from a combination of kink-solitons with a very high atomary level accuracy.
A comparative classification scheme provides a good basis for several approaches to understand proteins, including prediction of relations between their structure and biological function. But it remains a challenge to combine a classification scheme that describes a protein starting from its organized secondary structures and often involves direct human involvement, with an atomary level Physics based approach where a protein is fundamentally nothing more than an ensemble of mutually interacting carbon, hydrogen, oxygen and nitrogen atoms. In order to bridge these two complementary approaches to proteins, conceptually novel tools need to be introduced. Here we explain how the geometrical shape of entire folded proteins can be described analytically in terms of a single explicit elementary function that is familiar from nonlinear physical systems where it is known as the kink-soliton. Our approach enables the conversion of hierarchical structural information into a quantitative form that allows for a folded protein to be characterized in terms of a small number of global parameters that are in principle computable from atomary level considerations. As an example we describe in detail how the native fold of the myoglobin 1M6C emerges from a combination of kink-solitons with a very high atomary level accuracy. We also verify that our approach describes longer loops and loops connecting \alpha-helices with \beta-strands, with same overall accuracy.
[ { "type": "R", "before": "However,", "after": "But", "start_char_pos": 188, "end_char_pos": 196 }, { "type": "A", "before": null, "after": "URLanized", "start_char_pos": 298, "end_char_pos": 298 }, { "type": "R", "before": "viewed as not much", "after": "nothing", "start_char_pos": 443, "end_char_pos": 461 }, { "type": "R", "before": "It appears that in", "after": "In", "start_char_pos": 553, "end_char_pos": 571 }, { "type": "D", "before": "secondary superstructures such as helix-loop-helix motifs, and even", "after": null, "start_char_pos": 729, "end_char_pos": 796 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 820, "end_char_pos": 821 }, { "type": "D", "before": "of the feasibility of our approach", "after": null, "start_char_pos": 1273, "end_char_pos": 1307 }, { "type": "A", "before": null, "after": "accuracy. We also verify that our approach describes longer loops and loops connecting \\alpha-helices with \\beta-strands, with same overall", "start_char_pos": 1447, "end_char_pos": 1447 } ]
[ 0, 187, 552, 683, 989, 1258 ]
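For the record above (arXiv 1011.3181): the "single explicit elementary function" is, in its generic continuum prototype, the hyperbolic-tangent kink. The normalization below is our hedged illustration, not necessarily the paper's exact profile; m, \lambda and the center s are the kind of global parameters the abstract refers to.

```latex
% Hedged illustration: the phi^4 (double-well) kink, the prototype
% kink-soliton. One checks directly that kappa solves the equation.
\[
  \frac{\mathrm{d}^2 \kappa}{\mathrm{d}x^2}
  = 2\lambda\,\kappa\,(\kappa^2 - m^2)
  \qquad\Longrightarrow\qquad
  \kappa(x) = m \tanh\!\bigl(\sqrt{\lambda}\,m\,(x - s)\bigr).
\]
```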
1011.3685
1
This paper studies multidimensional dynamic risk measure induced by conditional g-expectation . A notion of multidimensional g-expectation is proposed to provide a multidimensional version of nonlinear expectations. By a technical result on explicit expressions for the comparison theorem, uniqueness theorem and viability on a rectangle of solutions to multidimensional backward stochastic differential equations, some necessary and sufficient conditions are given for the constancy, monotonicity, positivity, homogeneity and translatability properties of multidimensional conditional g-expectation and multidimensional dynamic risk measure ; we prove that a multidimensional dynamic g-risk measure is nonincreasingly convex if and only if the generator g satisfies a quasi-monotone increasingly convex condition. A general dual representation is also given for multidimensional dynamic convex g-risk measure in which the penalty term is expressed more precisely. Several examples are presented to show how this multidimensional approach is applied to a class of agent-based model and to the problem of risk allocation .
This paper deals with multidimensional dynamic risk measures induced by conditional g-expectations . A notion of multidimensional g-expectation is proposed to provide a multidimensional version of nonlinear expectations. By a technical result on explicit expressions for the comparison theorem, uniqueness theorem and viability on a rectangle of solutions to multidimensional backward stochastic differential equations, some necessary and sufficient conditions are given for the constancy, monotonicity, positivity, homogeneity and translatability properties of multidimensional conditional g-expectations and multidimensional dynamic risk measures ; we prove that a multidimensional dynamic g-risk measure is nonincreasingly convex if and only if the generator g satisfies a quasi-monotone increasingly convex condition. A general dual representation is given for the multidimensional dynamic convex g-risk measure in which the penalty term is expressed more precisely. Similarly to the one dimensional case, a sufficient condition for a multidimensional dynamic risk measure to be a g-expectation is also explored. As to applications, we show how this multidimensional approach can be applied to measure the insolvency risk of a firm with interacted subsidiaries .
[ { "type": "R", "before": "studies", "after": "deals with", "start_char_pos": 11, "end_char_pos": 18 }, { "type": "R", "before": "measure", "after": "measures", "start_char_pos": 49, "end_char_pos": 56 }, { "type": "R", "before": "g-expectation", "after": "g-expectations", "start_char_pos": 80, "end_char_pos": 93 }, { "type": "R", "before": "g-expectation", "after": "g-expectations", "start_char_pos": 586, "end_char_pos": 599 }, { "type": "R", "before": "measure", "after": "measures", "start_char_pos": 634, "end_char_pos": 641 }, { "type": "R", "before": "also given for", "after": "given for the", "start_char_pos": 848, "end_char_pos": 862 }, { "type": "R", "before": "Several examples are presented to", "after": "Similarly to the one dimensional case, a sufficient condition for a multidimensional dynamic risk measure to be a g-expectation is also explored. As to applications, we", "start_char_pos": 965, "end_char_pos": 998 }, { "type": "R", "before": "is applied to a class of agent-based model and to the problem of risk allocation", "after": "can be applied to measure the insolvency risk of a firm with interacted subsidiaries", "start_char_pos": 1039, "end_char_pos": 1119 } ]
[ 0, 95, 215, 643, 814, 964 ]
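For the record above (arXiv 1011.3685), a standard reminder of the object being generalized (one-dimensional notation; the paper's multidimensional version makes Y vector-valued). The conditional g-expectation of a terminal position \xi is \mathcal{E}_g[\xi \mid \mathcal{F}_t] := Y_t, where (Y, Z) solves the backward SDE below, and the induced dynamic risk measure is \rho_t(\xi) = \mathcal{E}_g[-\xi \mid \mathcal{F}_t].

```latex
% Standard BSDE defining a conditional g-expectation (textbook form).
\[
  Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,\mathrm{d}s
            - \int_t^T Z_s\,\mathrm{d}W_s ,
  \qquad 0 \le t \le T .
\]
```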