{"doc_id": "0809.0773", "revision_depth": "1", "before_revision": "Noise can significantly influence the behaviour of biochemical reaction networks. While ordinary differential equation (ODE) models remain the conceptual framework for modelling many cellular processes, specific situations demand stochastic models to capture the influence of noise. The most common formulation of stochastic models for biochemical networks is the chemical master equation (CME). While stochastic simulations are a practical way to realise the CME, analytical approximations offer more insights into the influence of noise on cell function. Towards this end, the recently developed two-moment approximation (2MA) is a promising approach, accounting for the coupling between the means and ( co)variances. It is this influence of ( co)variance on the mean behaviour, which cannot be represented by conventional ODE models. We extend the derivation of the 2MA by establishing two advances to previous efforts: a) relative concentrations and b) non-elementary reactions. Both aspects are important in systems biology where one is often forced to aggregate elementary reactions into single stepreactions, and some models use relative concentrations (as a ratio of two concentrations or copy numbers). Previous derivations assume elementary reactions and rely on concentrations defined as copy numbers per unit volume. We demonstrate this 2MA approach with an application to the well established fission yeast cell cycle model. The simulations of the 2MA model show oscillatory behaviour near the M / G checkpoint. The behaviour around this bifurcation point is significantly different from that predicted by the ODE model. 
What this suggests is that the 2MA approach can reveal hidden dynamics near critical points .", "after_revision": " While ordinary differential equations (ODEs) form the conceptual framework for modelling many cellular processes, specific situations demand stochastic models to capture the influence of noise. The most common formulation of stochastic models for biochemical networks is the chemical master equation (CME). While stochastic simulations are a practical way to realise the CME, analytical approximations offer more insight into the influence of noise . Towards that end, the two-moment approximation (2MA) is a promising addition to the established analytical approaches including the chemical Langevin equation (CLE) and the related linear noise approximation (LNA). The 2MA approach directly tracks the mean and ( co)variance which are coupled in general. This coupling is not obvious in CME and CLE and ignored by LNA and conventional ODE models. We extend previous derivations of 2MA by allowing a) non-elementary reactions and b) relative concentrations. Often, several elementary reactions are approximated by a single step. Furthermore, practical situations often require the use relative concentrations . We investigate the applicability of the 2MA approach to the well established fission yeast cell cycle model. Our analytical model reproduces the clustering of cycle times observed in experiments. 
This is explained through multiple resettings of MPF, caused by the coupling between mean and (co)variance, near the G2 / M transition .", "edit_actions": [{"type": "D", "before": "Noise can significantly influence the behaviour of biochemical reaction networks.", "after": null, "start_char_pos": 0, "end_char_pos": 81}, {"type": "R", "before": "equation (ODE) models remain", "after": "equations (ODEs) form", "start_char_pos": 110, "end_char_pos": 138}, {"type": "R", "before": "insights", "after": "insight", "start_char_pos": 502, "end_char_pos": 510}, {"type": "R", "before": "on cell function. Towards this", "after": ". Towards that", "start_char_pos": 539, "end_char_pos": 569}, {"type": "D", "before": "recently developed", "after": null, "start_char_pos": 579, "end_char_pos": 597}, {"type": "R", "before": "approach, accounting for the coupling between the means and", "after": "addition to the established analytical approaches including the chemical Langevin equation (CLE) and the related linear noise approximation (LNA). The 2MA approach directly tracks the mean and", "start_char_pos": 644, "end_char_pos": 703}, {"type": "D", "before": "co)variances. It is this influence of (", "after": null, "start_char_pos": 706, "end_char_pos": 745}, {"type": "R", "before": "on the mean behaviour, which cannot be represented by", "after": "which are coupled in general. This coupling is not obvious in CME and CLE and ignored by LNA and", "start_char_pos": 758, "end_char_pos": 811}, {"type": "R", "before": "the derivation of the", "after": "previous derivations of", "start_char_pos": 847, "end_char_pos": 868}, {"type": "R", "before": "establishing two advances to previous efforts: a) relative concentrations", "after": "allowing a) non-elementary reactions", "start_char_pos": 876, "end_char_pos": 949}, {"type": "R", "before": "non-elementary reactions. 
Both aspects are important in systems biology where one is often forced to aggregate elementary reactions into single stepreactions, and some models", "after": "relative concentrations. Often, several elementary reactions are approximated by a single step. Furthermore, practical situations often require the", "start_char_pos": 957, "end_char_pos": 1131}, {"type": "R", "before": "(as a ratio of two concentrations or copy numbers). Previous derivations assume elementary reactions and rely on concentrations defined as copy numbers per unit volume. We demonstrate this", "after": ". We investigate the applicability of the", "start_char_pos": 1160, "end_char_pos": 1348}, {"type": "D", "before": "with an application", "after": null, "start_char_pos": 1362, "end_char_pos": 1381}, {"type": "R", "before": "The simulations of the 2MA model show oscillatory behaviour near the M", "after": "Our analytical model reproduces the clustering of cycle times observed in experiments. This is explained through multiple resettings of MPF, caused by the coupling between mean and (co)variance, near the G2", "start_char_pos": 1438, "end_char_pos": 1508}, {"type": "R", "before": "G checkpoint. The behaviour around this bifurcation point is significantly different from that predicted by the ODE model. What this suggests is that the 2MA approach can reveal hidden dynamics near critical points", "after": "M transition", "start_char_pos": 1511, "end_char_pos": 1725}], "sents_char_pos": [0, 81, 282, 395, 556, 719, 836, 982, 1211, 1328, 1437, 1524, 1633]} {"doc_id": "0901.4904", "revision_depth": "2", "before_revision": "A continuum model has been posited for the global analysis of data pertaining to the semantic network of a complex operating system ( free and open-source software ) . 
While the frequency distributions of links in both the in-directed and out-directed dependency networks of this system follow Zipf's law for the intermediate nodes, the richest nodes, as well as the weakest nodes, deviate from this trend , and exhibit a saturation behaviour arising from the finiteness of semantic possibilities in the network. To preserve uniqueness of operations in the network, the nodes obey an \"exclusion principle\", with no two nodes being exactly alike in their functionality. The parameters related to finite-size behaviour make a quantitative distinction between the two directed networks of incoming and outgoing links . Dynamic evolution , over two generations of free software releases , shows that the saturation properties of the in-directed and out-directed networks are oppositely affected . For the out-degree distribution, whose top nodes form the foundation of the entire network, the initial condition for a dynamic model, evolving towards a steady scale-free frequency distribution of nodes, determines the finite limit to the number of top nodes that the mature out-directed network can have .", "after_revision": "A continuum model has been proposed to fit the data pertaining to the directed networks in free and open-source software . While the degree distributions of links in both the in-directed and out-directed dependency networks follow Zipf's law for the intermediate nodes, the most richly linked nodes, as well as the most poorly linked nodes, deviate from this trend and exhibit finite-size effects. The finite-size parameters make a quantitative distinction between the in-directed and out-directed networks . Dynamic evolution of free software releases shows that the finite-size properties of the in-directed and out-directed networks are opposite in nature . For the out-degree distribution, the initial condition for a dynamic evolution also corresponds to the limiting count of rich nodes that the mature out-directed network can have . 
The number of nodes contributing out-directed links grows with each passing generation of software release, but this growth ultimately saturates towards a finite value due to the finiteness of semantic possibilities in the network .", "edit_actions": [{"type": "R", "before": "posited for the global analysis of", "after": "proposed to fit the", "start_char_pos": 27, "end_char_pos": 61}, {"type": "R", "before": "semantic network of a complex operating system (", "after": "directed networks in", "start_char_pos": 85, "end_char_pos": 133}, {"type": "D", "before": ")", "after": null, "start_char_pos": 164, "end_char_pos": 165}, {"type": "R", "before": "frequency", "after": "degree", "start_char_pos": 178, "end_char_pos": 187}, {"type": "D", "before": "of this system", "after": null, "start_char_pos": 272, "end_char_pos": 286}, {"type": "R", "before": "richest", "after": "most richly linked", "start_char_pos": 337, "end_char_pos": 344}, {"type": "R", "before": "weakest", "after": "most poorly linked", "start_char_pos": 367, "end_char_pos": 374}, {"type": "R", "before": ", and exhibit a saturation behaviour arising from the finiteness of semantic possibilities in the network. To preserve uniqueness of operations in the network, the nodes obey an \"exclusion principle\", with no two nodes being exactly alike in their functionality. The parameters related to", "after": "and exhibit", "start_char_pos": 406, "end_char_pos": 694}, {"type": "R", "before": "behaviour", "after": "effects. 
The finite-size parameters", "start_char_pos": 707, "end_char_pos": 716}, {"type": "R", "before": "two directed networks of incoming and outgoing links", "after": "in-directed and out-directed networks", "start_char_pos": 761, "end_char_pos": 813}, {"type": "D", "before": ", over two generations", "after": null, "start_char_pos": 834, "end_char_pos": 856}, {"type": "D", "before": ",", "after": null, "start_char_pos": 883, "end_char_pos": 884}, {"type": "R", "before": "saturation", "after": "finite-size", "start_char_pos": 900, "end_char_pos": 910}, {"type": "R", "before": "oppositely affected", "after": "opposite in nature", "start_char_pos": 971, "end_char_pos": 990}, {"type": "R", "before": "whose top nodes form the foundation of the entire network, the", "after": "the", "start_char_pos": 1026, "end_char_pos": 1088}, {"type": "R", "before": "model, evolving towards a steady scale-free frequency distribution of nodes, determines the finite limit to the number of top", "after": "evolution also corresponds to the limiting count of rich", "start_char_pos": 1121, "end_char_pos": 1246}, {"type": "A", "before": null, "after": ". The number of nodes contributing out-directed links grows with each passing generation of software release, but this growth ultimately saturates towards a finite value due to the finiteness of semantic possibilities in the network", "start_char_pos": 1299, "end_char_pos": 1299}], "sents_char_pos": [0, 332, 512, 668, 992]} {"doc_id": "0902.1328", "revision_depth": "2", "before_revision": "We study the class of Azema-Yor processes defined from a general semimartingale with a continuous running supremum process. We show that they arise as unique strong solutions of the Bachelier stochastic differential equation which we prove is equivalent to the Drawdown equation. Solutions of the latter have the drawdown property: they always stay above a given function of their past supremum . 
We then show that any process which satisfies the drawdown property is in fact an Azema-Yor process. The proofs exploit group structure of the set of Azema-Yor processes, indexed by functions, which we introduce. Secondly we study in detail Azema-Yor martingales defined from a non-negative local martingale converging to zero at infinity. We establish relations between Average Value at Risk, Drawdown function, Hardy-Littlewood transform and its generalised inverse. In particular, we construct Azema-Yor martingales with a given terminal law and this allows us to rediscover the Azema-Yor solution to the Skorokhod embedding problem. Finally, we characterise Azema-Yor martingales showing they are optimal relative to the concave ordering of terminal variables among martingales whose maximum dominates stochastically a given benchmark.", "after_revision": "We study the class of Az\\'ema-Yor processes defined from a general semimartingale with a continuous running maximum process. We show that they arise as unique strong solutions of the Bachelier stochastic differential equation which we prove is equivalent to the drawdown equation. Solutions of the latter have the drawdown property: they always stay above a given function of their past maximum . We then show that any process which satisfies the drawdown property is in fact an Az\\'ema-Yor process. The proofs exploit group structure of the set of Az\\'ema-Yor processes, indexed by functions, which we introduce. We investigate in detail Az\\'ema-Yor martingales defined from a nonnegative local martingale converging to zero at infinity. We establish relations between average value at risk, drawdown function, Hardy-Littlewood transform and its inverse. In particular, we construct Az\\'ema-Yor martingales with a given terminal law and this allows us to rediscover the Az\\'ema-Yor solution to the Skorokhod embedding problem. 
Finally, we characterize Az\\'ema-Yor martingales showing they are optimal relative to the concave ordering of terminal variables among martingales whose maximum dominates stochastically a given benchmark.", "edit_actions": [{"type": "R", "before": "Azema-Yor", "after": "Az\\'ema-Yor", "start_char_pos": 22, "end_char_pos": 31}, {"type": "R", "before": "supremum", "after": "maximum", "start_char_pos": 106, "end_char_pos": 114}, {"type": "R", "before": "Drawdown", "after": "drawdown", "start_char_pos": 261, "end_char_pos": 269}, {"type": "R", "before": "supremum", "after": "maximum", "start_char_pos": 386, "end_char_pos": 394}, {"type": "R", "before": "Azema-Yor", "after": "Az\\'ema-Yor", "start_char_pos": 479, "end_char_pos": 488}, {"type": "R", "before": "Azema-Yor", "after": "Az\\'ema-Yor", "start_char_pos": 547, "end_char_pos": 556}, {"type": "R", "before": "Secondly we study in detail Azema-Yor", "after": "We investigate in detail Az\\'ema-Yor", "start_char_pos": 610, "end_char_pos": 647}, {"type": "R", "before": "non-negative", "after": "nonnegative", "start_char_pos": 675, "end_char_pos": 687}, {"type": "R", "before": "Average Value at Risk, Drawdown", "after": "average value at risk, drawdown", "start_char_pos": 768, "end_char_pos": 799}, {"type": "D", "before": "generalised", "after": null, "start_char_pos": 845, "end_char_pos": 856}, {"type": "R", "before": "Azema-Yor", "after": "Az\\'ema-Yor", "start_char_pos": 894, "end_char_pos": 903}, {"type": "R", "before": "Azema-Yor", "after": "Az\\'ema-Yor", "start_char_pos": 979, "end_char_pos": 988}, {"type": "R", "before": "characterise Azema-Yor", "after": "characterize Az\\'ema-Yor", "start_char_pos": 1046, "end_char_pos": 1068}], "sents_char_pos": [0, 123, 279, 396, 497, 609, 736, 865, 1033]} {"doc_id": "0905.2770", "revision_depth": "1", "before_revision": "In this paper we revisit the problem of pricing and hedging plain vanilla single currency interest rate derivatives using different yield curves for 
market coherent estimation of discount factors and forward rates with different underlying rate tenors (e. g. Euribor 3 months, 6 months,.etc.). Within such double-curve-single-currency framework, adopted by the market after the liquidity crisis started in summer 2007, standard single-curve no arbitrage relations are no longer valid and can be formally recovered through the introduction of a basis adjustment . Numerical results show that the resulting basis adjustment curves may display an oscillating micro-term structure that may induce appreciable effects on the price of interest rate instruments. Recurring to the foreign-currency analogy we also derive no arbitrage, double-curve market-like formulas for basic plain vanilla interest rate derivatives, FRAs, swaps, caps/floors and swaptions in particular. These expressions include a quanto adjustment typical of cross-currency derivatives, naturally originated by the change between the numeraires associated to the two yield curves, that carries on a volatility and correlation dependence. Numerical scenarios confirm that such correction can be non-negligible , thus making unadjusted double-curve prices, in principle, not arbitrage free .", "after_revision": "We revisit the problem of pricing and hedging plain vanilla single-currency interest rate derivatives using multiple distinct yield curves for market coherent estimation of discount factors and forward rates with different underlying rate tenors . Within such double-curve-single-currency framework, adopted by the market after the credit-crunch crisis started in summer 2007, standard single-curve no-arbitrage relations are no longer valid , and can be recovered by taking properly into account the forward basis bootstrapped from market basis swaps . Numerical results show that the resulting forward basis curves may display a richer micro-term structure that may induce appreciable effects on the price of interest rate instruments. 
By recurring to the foreign-currency analogy we also derive generalised no-arbitrage double-curve market-like formulas for basic plain vanilla interest rate derivatives, FRAs, swaps, caps/floors and swaptions in particular. These expressions include a quanto adjustment typical of cross-currency derivatives, naturally originated by the change between the numeraires associated to the two yield curves, that carries on a volatility and correlation dependence. Numerical scenarios confirm that such correction can be non negligible , thus making unadjusted double-curve prices, in principle, not arbitrage free . Both the forward basis and the quanto adjustment find a natural financial explanation in terms of counterparty risk .", "edit_actions": [{"type": "R", "before": "In this paper we", "after": "We", "start_char_pos": 0, "end_char_pos": 16}, {"type": "R", "before": "single currency", "after": "single-currency", "start_char_pos": 74, "end_char_pos": 89}, {"type": "R", "before": "different", "after": "multiple distinct", "start_char_pos": 122, "end_char_pos": 131}, {"type": "R", "before": "(e. g. 
Euribor 3 months, 6 months,.etc.).", "after": ".", "start_char_pos": 252, "end_char_pos": 293}, {"type": "R", "before": "liquidity", "after": "credit-crunch", "start_char_pos": 378, "end_char_pos": 387}, {"type": "R", "before": "no arbitrage", "after": "no-arbitrage", "start_char_pos": 441, "end_char_pos": 453}, {"type": "A", "before": null, "after": ",", "start_char_pos": 484, "end_char_pos": 484}, {"type": "R", "before": "formally recovered through the introduction of a basis adjustment", "after": "recovered by taking properly into account the forward basis bootstrapped from market basis swaps", "start_char_pos": 496, "end_char_pos": 561}, {"type": "R", "before": "basis adjustment", "after": "forward basis", "start_char_pos": 606, "end_char_pos": 622}, {"type": "R", "before": "an oscillating", "after": "a richer", "start_char_pos": 642, "end_char_pos": 656}, {"type": "R", "before": "Recurring", "after": "By recurring", "start_char_pos": 757, "end_char_pos": 766}, {"type": "R", "before": "no arbitrage,", "after": "generalised no-arbitrage", "start_char_pos": 814, "end_char_pos": 827}, {"type": "R", "before": "non-negligible", "after": "non negligible", "start_char_pos": 1259, "end_char_pos": 1273}, {"type": "A", "before": null, "after": ". Both the forward basis and the quanto adjustment find a natural financial explanation in terms of counterparty risk", "start_char_pos": 1353, "end_char_pos": 1353}], "sents_char_pos": [0, 293, 756, 966, 1202]} {"doc_id": "0905.3502", "revision_depth": "1", "before_revision": "The study of biochemical pathways usually focuses on a small section of the protein interaction network. Fluctuations in such a system are not only generated intrinsically by molecular dynamics, but also extrinsically, by interactions of the system with the rest of the network and its environment. Concentration fluctuations of a substance outside the studied system can enter it through a nonlinear uptake reaction which acts as a nonlinear filter. 
Varying the intensity of the input noise varies the mean of the output noise after the passage through the filter, which causes a change of stability properties of the system. Using an analytical method of small noise expansion, I prove that when weak and rapid noise enters the system through a reaction of Michaelis-Menten type (reactionrate function monotonically increasing and concave) , then the steady states of the system always shift to the right as noise intensity increases. I demonstrate this by an example of two different models of lac operon . The bistable switch responds to fluctuations in extracellular TMG/lactose concentration in an asymmetric manner because of the displacement of its bistability region to the right: As noise intensity increases, uninduction becomes easier and induction becomes more difficult . The steady-state displacement due to weak and rapid extrinsic noise passing through a nonlinear filter is a universal phenomenon: It is independent of the kinetics of the system but it only depends on the filtering function. The calculation method presented enables even qualitative predictions of this effect, only by inspection of the experimental data .", "after_revision": "The study of biochemical pathways usually focuses on a small section of a protein interactions network. Two distinct sources contribute to the noise in such a system : intrinsic noise, inherent in the studied reactions, and extrinsic noise generated in other parts of the network or in the environment. We study the effect of extrinsic noise entering the system through a nonlinear uptake reaction which acts as a nonlinear filter. Varying input noise intensity varies the mean of the noise after the passage through the filter, which changes the stability properties of the system. The steady-state displacement due to small noise is independent on the kinetics of the system but it only depends on the nonlinearity of the input function. 
For monotonically increasing and concave input functions such as the Michaelis-Menten uptake rate, we give a simple argument based on the small-noise expansion, which enables qualitative predictions of the steady-state displacement only by inspection of experimental data: when weak and rapid noise enters the system through a Michaelis-Menten reaction , then the graph of the system's steady states vs. the mean of the input signal always shifts to the right as noise intensity increases. We test the predictions on two models of lac operon , where TMG/lactose uptake is driven by a Michaelis-Menten enzymatic process. We show that as a consequence of the steady state displacement due to fluctuations in extracellular TMG/lactose concentration the lac switch responds in an asymmetric manner : as noise intensity increases, switching off lactose metabolism becomes easier and switching it on becomes more difficult .", "edit_actions": [{"type": "R", "before": "the protein interaction network. Fluctuations", "after": "a protein interactions network. Two distinct sources contribute to the noise", "start_char_pos": 72, "end_char_pos": 117}, {"type": "R", "before": "are not only generated intrinsically by molecular dynamics, but also extrinsically, by interactions of the system with the rest of the network and its environment. Concentration fluctuations of a substance outside the studied system can enter it", "after": ": intrinsic noise, inherent in the studied reactions, and extrinsic noise generated in other parts of the network or in the environment. 
We study the effect of extrinsic noise entering the system", "start_char_pos": 135, "end_char_pos": 380}, {"type": "R", "before": "the intensity of the input noise", "after": "input noise intensity", "start_char_pos": 459, "end_char_pos": 491}, {"type": "D", "before": "output", "after": null, "start_char_pos": 515, "end_char_pos": 521}, {"type": "R", "before": "causes a change of", "after": "changes the", "start_char_pos": 572, "end_char_pos": 590}, {"type": "R", "before": "Using an analytical method of small noise expansion, I prove that", "after": "The steady-state displacement due to small noise is independent on the kinetics of the system but it only depends on the nonlinearity of the input function. For monotonically increasing and concave input functions such as the Michaelis-Menten uptake rate, we give a simple argument based on the small-noise expansion, which enables qualitative predictions of the steady-state displacement only by inspection of experimental data:", "start_char_pos": 627, "end_char_pos": 692}, {"type": "D", "before": "reaction of", "after": null, "start_char_pos": 747, "end_char_pos": 758}, {"type": "R", "before": "type (reactionrate function monotonically increasing and concave)", "after": "reaction", "start_char_pos": 776, "end_char_pos": 841}, {"type": "R", "before": "steady states of the system always shift", "after": "graph of the system's steady states vs. the mean of the input signal always shifts", "start_char_pos": 853, "end_char_pos": 893}, {"type": "R", "before": "I demonstrate this by an example of two different", "after": "We test the predictions on two", "start_char_pos": 937, "end_char_pos": 986}, {"type": "R", "before": ". The bistable switch responds", "after": ", where TMG/lactose uptake is driven by a Michaelis-Menten enzymatic process. 
We show that as a consequence of the steady state displacement due", "start_char_pos": 1008, "end_char_pos": 1038}, {"type": "A", "before": null, "after": "the lac switch responds", "start_char_pos": 1098, "end_char_pos": 1098}, {"type": "R", "before": "because of the displacement of its bistability region to the right: As", "after": ": as", "start_char_pos": 1123, "end_char_pos": 1193}, {"type": "R", "before": "uninduction", "after": "switching off lactose metabolism", "start_char_pos": 1221, "end_char_pos": 1232}, {"type": "R", "before": "induction", "after": "switching it on", "start_char_pos": 1252, "end_char_pos": 1261}, {"type": "D", "before": ". The steady-state displacement due to weak and rapid extrinsic noise passing through a nonlinear filter is a universal phenomenon: It is independent of the kinetics of the system but it only depends on the filtering function. The calculation method presented enables even qualitative predictions of this effect, only by inspection of the experimental data", "after": null, "start_char_pos": 1285, "end_char_pos": 1641}], "sents_char_pos": [0, 104, 298, 450, 626, 936, 1009, 1286, 1511]} {"doc_id": "0906.4279", "revision_depth": "1", "before_revision": "We propose a new approximation to the description of intracellular dynamics , based on quantum principles. We introduce the notion of \"Catalytic force\" Cf , as a back-action effect of the molecular target of catalysis on the catalytic microenvironment , adjusting the microenvironment towards a state that facilitates the catalytic act. This mechanism is proposed as a new physical justification for the URLanization phenomenon, having an advantage over more traditional approaches based on statistical mechanics of open systems far from equilibrium, as it does not encounter the problem of \"tradeoff between stability and complexity\" at the level of individual cell. 
The Cf is considered as a force of reaction, which keeps the physical state of the cell close to the ground state, where all enzymatic acts work perfectly well. Given that ground state is subject to unitary evolution, this notion is proposed as a starting point in a more general strategy of quantum description of intracellular processes, termed here \"Euclidean approach\". The next step of this strategy is transition from the description of ground state to that one of growth , and we suggest how it can be accomplished using arguments from the fluctuation-dissipation theorem. Finally, we argue that biological adaptation could be a general situation to experimentally observe quantum entanglement in biological systems. Given that the most reliable and informative observable of an individual cell is the sequence of its genome, we propose that the nonclassical correlations between individual molecular events at the a single cell level could be easiest to detect using high throughput DNA sequencing.", "after_revision": "Application of quantum principles to living cells requires a new approximation of the full quantum mechanical description of intracellular dynamics . We discuss what principal elements any such good approximation should contain. As one such element, the notion of \"Catalytic force\" Cf is introduced. Cf is the effect of the molecular target of catalysis on the catalytic microenvironment that adjusts the microenvironment towards a state that facilitates the catalytic act. This phenomenon is experimentally testable and has an intriguing implication for URLanization and evolution, as it amounts to \"optimization without natural selection of replicators\". Unlike the statistical-mechanical approaches to URLanization, the Cf principle does not encounter the problem of \"tradeoff between stability and complexity\" at the level of individual cell. 
Physically, the Cf is considered as a harmonic-like force of reaction, which keeps the state of the cell close to the ground state, defined here as a state where enzymatic acts work most efficiently. Ground state is subject to unitary evolution, and serves as a starting point in a general strategy of quantum description of intracellular processes, termed here \"Euclidean approach\". The next step of this strategy is transition from the description of ground state to that one of growing state , and we suggest how it can be accomplished using arguments from the fluctuation-dissipation theorem. Finally, given that the most reliable and informative observable of an individual cell is the sequence of its genome, we propose that the non-classical correlations between individual molecular events at the single cell level could be easiest to detect using high throughput DNA sequencing.", "edit_actions": [{"type": "R", "before": "We propose", "after": "Application of quantum principles to living cells requires", "start_char_pos": 0, "end_char_pos": 10}, {"type": "R", "before": "to the", "after": "of the full quantum mechanical", "start_char_pos": 31, "end_char_pos": 37}, {"type": "R", "before": ", based on quantum principles. We introduce", "after": ". We discuss what principal elements any such good approximation should contain. As one such element,", "start_char_pos": 76, "end_char_pos": 119}, {"type": "R", "before": ", as a back-action", "after": "is introduced. 
Cf is the", "start_char_pos": 155, "end_char_pos": 173}, {"type": "R", "before": ", adjusting", "after": "that adjusts", "start_char_pos": 252, "end_char_pos": 263}, {"type": "R", "before": "mechanism is proposed as a new physical justification for the URLanization phenomenon, having an advantage over more traditional approaches based on statistical mechanics of open systems far from equilibrium, as it", "after": "phenomenon is experimentally testable and has an intriguing implication for URLanization and evolution, as it amounts to \"optimization without natural selection of replicators\". Unlike the statistical-mechanical approaches to URLanization, the Cf principle", "start_char_pos": 342, "end_char_pos": 556}, {"type": "R", "before": "The", "after": "Physically, the", "start_char_pos": 668, "end_char_pos": 671}, {"type": "A", "before": null, "after": "harmonic-like", "start_char_pos": 694, "end_char_pos": 694}, {"type": "D", "before": "physical", "after": null, "start_char_pos": 730, "end_char_pos": 738}, {"type": "R", "before": "where all", "after": "defined here as a state where", "start_char_pos": 784, "end_char_pos": 793}, {"type": "R", "before": "perfectly well. Given that ground", "after": "most efficiently. Ground", "start_char_pos": 814, "end_char_pos": 847}, {"type": "R", "before": "this notion is proposed", "after": "and serves", "start_char_pos": 887, "end_char_pos": 910}, {"type": "D", "before": "more", "after": null, "start_char_pos": 936, "end_char_pos": 940}, {"type": "R", "before": "growth", "after": "growing state", "start_char_pos": 1140, "end_char_pos": 1146}, {"type": "R", "before": "we argue that biological adaptation could be a general situation to experimentally observe quantum entanglement in biological systems. 
Given that", "after": "given that", "start_char_pos": 1258, "end_char_pos": 1403}, {"type": "R", "before": "nonclassical", "after": "non-classical", "start_char_pos": 1522, "end_char_pos": 1534}, {"type": "D", "before": "a", "after": null, "start_char_pos": 1591, "end_char_pos": 1592}], "sents_char_pos": [0, 106, 336, 667, 829, 1042, 1248, 1392]} {"doc_id": "0909.1974", "revision_depth": "1", "before_revision": "This article aims at reviewing recent empirical and theoretical developments usually grouped under the term Econophysics. This new interdisciplinary field has grown in various directions: theoretical macroeconomics (wealth distributions), microstructure of financial markets (order book modeling ), econometrics of financial bubbles and crashes . We give a brief introduction and begin with discussing interactions between Physics, Mathematics, Economics and Finance that led to the emergence of Econophysics in the second part. Then the third part is dedicated to empirical studies revealing statistical properties of financial time series. We begin the presentation with the widely acknowledged \"stylized facts \" describing the distribution of the returns of financial assets: fat-tails , volatility clustering, etc. Then we show that some of these properties are directly linked to the way \"time \" is taken into account. We continue with the statistical properties observed on order books in financial markets. Contributions to the study of correlations of assets such as random matrix theory and graph theory are finally presented in this part. The fourth part of our review deals with models in Econophysics through the point of view of agent-based modeling. Using previous work originally presented in the fields of behavioural finance and market microstructure theory, econophysicists have developed agent-based models of order-driven markets that are extensively reviewed here. 
We then turn to models of wealth distribution where an agent-based approach also prevails: kinetic theory models , and continue with game theory models and review the now classic minority games. We end this review by providing an outlook on possible directions of research .", "after_revision": "This article aims at reviewing recent empirical and theoretical developments usually grouped under the term Econophysics. Since its name was coined in 1995 by merging the words Economics and Physics, this new interdisciplinary field has grown in various directions: theoretical macroeconomics (wealth distributions), microstructure of financial markets (order book modelling ), econometrics of financial bubbles and crashes , etc. In the first part of the review, we discuss on the emergence of Econophysics . Then we present empirical studies revealing statistical properties of financial time series. We begin the presentation with the widely acknowledged stylized facts which describe the returns of financial assets- fat tails , volatility clustering, autocorrelation, etc.- and recall that some of these properties are directly linked to the way time is taken into account. We continue with the statistical properties observed on order books in financial markets. For the sake of illustrating this review, (nearly) all the stated facts are reproduced using our own high-frequency financial database. Finally, contributions to the study of correlations of assets such as random matrix theory and graph theory are presented. In the second part of the review, we deal with models in Econophysics through the point of view of agent-based modelling. Amongst a large number of multi-agent-based models, we have identified three representative areas. First, using previous work originally presented in the fields of behavioural finance and market microstructure theory, econophysicists have developed agent-based models of order-driven markets that are extensively presented here. 
Second, kinetic theory models designed to explain some empirical facts on wealth distribution are reviewed. Third, we briefly summarize game theory models by reviewing the now classic minority game and related problems .", "edit_actions": [{"type": "R", "before": "This", "after": "Since its name was coined in 1995 by merging the words Economics and Physics, this", "start_char_pos": 122, "end_char_pos": 126}, {"type": "R", "before": "modeling", "after": "modelling", "start_char_pos": 287, "end_char_pos": 295}, {"type": "R", "before": ". We give a brief introduction and begin with discussing interactions between Physics, Mathematics, Economics and Finance that led to", "after": ", etc. In the first part of the review, we discuss on", "start_char_pos": 345, "end_char_pos": 478}, {"type": "R", "before": "in the second part. Then the third part is dedicated to", "after": ". Then we present", "start_char_pos": 509, "end_char_pos": 564}, {"type": "R", "before": "\"stylized facts \" describing the distribution of the", "after": "stylized facts which describe the", "start_char_pos": 697, "end_char_pos": 749}, {"type": "R", "before": "assets: fat-tails", "after": "assets- fat tails", "start_char_pos": 771, "end_char_pos": 788}, {"type": "R", "before": "etc. Then we show", "after": "autocorrelation, etc.- and recall", "start_char_pos": 814, "end_char_pos": 831}, {"type": "R", "before": "\"time \"", "after": "time", "start_char_pos": 893, "end_char_pos": 900}, {"type": "R", "before": "Contributions", "after": "For the sake of illustrating this review, (nearly) all the stated facts are reproduced using our own high-frequency financial database. Finally, contributions", "start_char_pos": 1014, "end_char_pos": 1027}, {"type": "R", "before": "finally presented in this part. The fourth part of our review deals", "after": "presented. In the second part of the review, we deal", "start_char_pos": 1117, "end_char_pos": 1182}, {"type": "R", "before": "modeling. 
Using", "after": "modelling. Amongst a large number of multi-agent-based models, we have identified three representative areas. First, using", "start_char_pos": 1252, "end_char_pos": 1267}, {"type": "R", "before": "reviewed here. We then turn to models of wealth distribution where an agent-based approach also prevails:", "after": "presented here. Second,", "start_char_pos": 1469, "end_char_pos": 1574}, {"type": "R", "before": ", and continue with", "after": "designed to explain some empirical facts on wealth distribution are reviewed. Third, we briefly summarize", "start_char_pos": 1597, "end_char_pos": 1616}, {"type": "R", "before": "and review", "after": "by reviewing", "start_char_pos": 1636, "end_char_pos": 1646}, {"type": "R", "before": "games. We end this review by providing an outlook on possible directions of research", "after": "game and related problems", "start_char_pos": 1672, "end_char_pos": 1756}], "sents_char_pos": [0, 121, 346, 528, 641, 818, 923, 1013, 1147, 1261, 1483, 1678]} {"doc_id": "0909.5125", "revision_depth": "1", "before_revision": "A bacteria colony may develop a small number of cells genetically identical , but phenotypically different from the majority cells in the populations . These so-called persister cells are insensitive to antibiotics treatment, and contribute to the serious problem of drug resistance. Resonant activation (RA) is a well-studied phenomenon in thermally activated barrier crossing events, where internal noise can constructively facilitate the barrier crossing . Through stochastic Gilliespie simulation with a generic toggle switch model, we demonstrated that the phenomenon also exist for cell phenotype switching . Then we used coupled single cell and population simulations to demonstrate that one can greatly reduce the time and the total amount of antibiotics needed to extinct a bacteria population with RA. 
We also suggested that RA can find application in other situations such as cancer radiotherapy .", "after_revision": "A bacterial colony may develop a small number of cells genetically identical to , but phenotypically different from other normally growing bacteria . These so-called persister cells keep themselves in a dormant state and thus are insensitive to antibiotic treatment, resulting in serious problems of drug resistance. In this paper, we proposed a novel strategy to \"kill\" persister cells by triggering them to switch, in a fast and synchronized way, into normally growing cells that are susceptible to antibiotics. The strategy is based on resonant activation (RA) , a well-studied phenomenon in physics where the internal noise of a system can constructively facilitate fast and synchronized barrier crossings . Through stochastic Gilliespie simulation with a generic toggle switch model, we demonstrated that RA exists in the phenotypic switching of a single bacterium. Further, by coupling single cell level and population level simulations, we showed that with RA, one can greatly reduce the time and total amount of antibiotics needed to sterilize a bacterial population. 
We suggest that resonant activation is a general phenomenon in phenotypic transition, and can find other applications such as cancer therapy .", "edit_actions": [{"type": "R", "before": "bacteria", "after": "bacterial", "start_char_pos": 2, "end_char_pos": 10}, {"type": "A", "before": null, "after": "to", "start_char_pos": 76, "end_char_pos": 76}, {"type": "R", "before": "the majority cells in the populations", "after": "other normally growing bacteria", "start_char_pos": 113, "end_char_pos": 150}, {"type": "A", "before": null, "after": "keep themselves in a dormant state and thus", "start_char_pos": 185, "end_char_pos": 185}, {"type": "R", "before": "antibiotics treatment, and contribute to the serious problem", "after": "antibiotic treatment, resulting in serious problems", "start_char_pos": 205, "end_char_pos": 265}, {"type": "R", "before": "Resonant", "after": "In this paper, we proposed a novel strategy to \"kill\" persister cells by triggering them to switch, in a fast and synchronized way, into normally growing cells that are susceptible to antibiotics. The strategy is based on resonant", "start_char_pos": 286, "end_char_pos": 294}, {"type": "R", "before": "is", "after": ",", "start_char_pos": 311, "end_char_pos": 313}, {"type": "R", "before": "thermally activated barrier crossing events, where internal noise", "after": "physics where the internal noise of a system", "start_char_pos": 343, "end_char_pos": 408}, {"type": "R", "before": "the barrier crossing", "after": "fast and synchronized barrier crossings", "start_char_pos": 439, "end_char_pos": 459}, {"type": "R", "before": "the phenomenon also exist for cell phenotype switching . Then we used coupled single cell and population simulations to demonstrate that", "after": "RA exists in the phenotypic switching of a single bacterium. 
Further, by coupling single cell level and population level simulations, we showed that with RA,", "start_char_pos": 560, "end_char_pos": 696}, {"type": "D", "before": "the", "after": null, "start_char_pos": 733, "end_char_pos": 736}, {"type": "R", "before": "extinct a bacteria population with RA. We also suggested that RA can find application in other situations", "after": "sterilize a bacterial population. We suggest that resonant activation is a general phenomenon in phenotypic transition, and can find other applications", "start_char_pos": 775, "end_char_pos": 879}, {"type": "R", "before": "radiotherapy", "after": "therapy", "start_char_pos": 895, "end_char_pos": 907}], "sents_char_pos": [0, 152, 285, 461, 616, 812]} {"doc_id": "1004.4402", "revision_depth": "1", "before_revision": "Futures trading is the core of futures business, and is considered as a typical complex system . To investigate the complexity of futures trading, we employ the analytical method of complex networks. First, we use real trading records from Shanghai Futures Exchange to construct futures trading networks, in which vertices are trading participants, and two vertices have a common edge if the two corresponding investors simultaneously appear in at least one trading record as a purchaser and a seller respectively. Then, we conduct a comprehensive statistical analysis on the constructed futures trading networks , and empirical results show that the futures trading networks exhibit such features as scale-free structure with interesting odd-even-degree divergence in low degree region , small-world effect, organization, power-law betweenness distribution, and shrinkage of both average path length and diameter as network size increases. 
To the best of our knowledge, this is the first work that uses real data to study futures trading networks, and we argue that the research results can shed light on the nature of real futures business.", "after_revision": "Futures trading is the core of futures business, and it is considered as one of the typical complex systems . To investigate the complexity of futures trading, we employ the analytical method of complex networks. First, we use real trading records from the Shanghai Futures Exchange to construct futures trading networks, in which nodes are trading participants, and two nodes have a common edge if the two corresponding investors appear simultaneously in at least one trading record as a purchaser and a seller respectively. Then, we conduct a comprehensive statistical analysis on the constructed futures trading networks . Empirical results show that the futures trading networks exhibit features such as scale-free behavior with interesting odd-even-degree divergence in low-degree regions , small-world effect, organization, power-law betweenness distribution, disassortative mixing, and shrinkage of both the average path length and the diameter as network size increases. 
To the best of our knowledge, this is the first work that uses real data to study futures trading networks, and we argue that the research results can shed light on the nature of real futures business.", "edit_actions": [{"type": "A", "before": null, "after": "it", "start_char_pos": 53, "end_char_pos": 53}, {"type": "R", "before": "a typical complex system", "after": "one of the typical complex systems", "start_char_pos": 71, "end_char_pos": 95}, {"type": "A", "before": null, "after": "the", "start_char_pos": 241, "end_char_pos": 241}, {"type": "R", "before": "vertices", "after": "nodes", "start_char_pos": 316, "end_char_pos": 324}, {"type": "R", "before": "vertices", "after": "nodes", "start_char_pos": 359, "end_char_pos": 367}, {"type": "R", "before": "simultaneously appear", "after": "appear simultaneously", "start_char_pos": 422, "end_char_pos": 443}, {"type": "R", "before": ", and empirical", "after": ". Empirical", "start_char_pos": 615, "end_char_pos": 630}, {"type": "R", "before": "such features", "after": "features such", "start_char_pos": 686, "end_char_pos": 699}, {"type": "R", "before": "structure", "after": "behavior", "start_char_pos": 714, "end_char_pos": 723}, {"type": "R", "before": "low degree region", "after": "low-degree regions", "start_char_pos": 771, "end_char_pos": 788}, {"type": "A", "before": null, "after": "disassortative mixing,", "start_char_pos": 861, "end_char_pos": 861}, {"type": "A", "before": null, "after": "the", "start_char_pos": 884, "end_char_pos": 884}, {"type": "A", "before": null, "after": "the", "start_char_pos": 909, "end_char_pos": 909}], "sents_char_pos": [0, 97, 200, 516, 945]} {"doc_id": "1005.1862", "revision_depth": "2", "before_revision": "We consider the estimation of integrated covariance (ICV) matrices of high dimensional diffusion processes based on high frequency observations. We start by studying the most commonly used estimator, the {\\it realized covariance} (RCV) matrix. 
We show that in the high dimensional case when the dimension p and the observation frequency n grow in the same rate, the limiting spectral distribution (LSD) of the RCV matrix depends on the covolatility process not only through the targeting ICV matrix, but also on how the covolatility process varies in time . We establish a Marcenko-Pastur type theorem for weighted sample covariance matrices, based on which we further establish a Marcenko-Pastur type theorem for RCV matrices for a class \\mathcal{C \\sC of diffusion processes. The results explicitly demonstrate how the time-variability of the covolatility process affects the LSD of RCV matrix. We then propose an alternative estimator, the {\\it time-variation adjusted realized covariance} (TVARCV) matrix. We show that for diffusion processes in class \\mathcal{C \\sC , the TVARCV matrix possesses the desirable property that its LSD depends solely on that of the targeting ICV matrix through a Mar\\v{c}enko-Pastur equation .", "after_revision": "We consider the estimation of integrated covariance (ICV) matrices of high dimensional diffusion processes based on high frequency observations. We start by studying the most commonly used estimator, the {\\it realized covariance} (RCV) matrix. We show that in the high dimensional case when the dimension p and the observation frequency n grow in the same rate, the limiting spectral distribution (LSD) of RCV depends on the covolatility process not only through the targeting ICV, but also on how the covolatility process varies in time . We establish a Marcenko-Pastur type theorem for weighted sample covariance matrices, based on which we obtain a Marcenko-Pastur type theorem for RCV for a class \\sC of diffusion processes. The results explicitly demonstrate how the time variability of the covolatility process affects the LSD of RCV . We further propose an alternative estimator, the {\\it time-variation adjusted realized covariance} (TVARCV) matrix. 
We show that for processes in class \\sC , the TVARCV possesses the desirable property that its LSD depends solely on that of the targeting ICV through the Mar\\v{c}enko-Pastur equation , and hence, in particular, the TVARCV can be used to recover the empirical spectral distribution of the ICV by using existing algorithms .", "edit_actions": [{"type": "R", "before": "the RCV matrix", "after": "RCV", "start_char_pos": 406, "end_char_pos": 420}, {"type": "R", "before": "not only through the targeting ICV matrix, but also on how the covolatility process varies in time", "after": "not only through the targeting ICV", "start_char_pos": 457, "end_char_pos": 555}, {"type": "A", "before": null, "after": ",", "start_char_pos": 555, "end_char_pos": 555}, {"type": "A", "before": null, "after": "but also on how the covolatility process varies in time", "start_char_pos": 555, "end_char_pos": 555}, {"type": "R", "before": "further establish", "after": "obtain", "start_char_pos": 661, "end_char_pos": 678}, {"type": "D", "before": "matrices", "after": null, "start_char_pos": 718, "end_char_pos": 726}, {"type": "D", "before": "\\mathcal{C", "after": null, "start_char_pos": 739, "end_char_pos": 749}, {"type": "R", "before": "time-variability", "after": "time variability", "start_char_pos": 821, "end_char_pos": 837}, {"type": "R", "before": "matrix. We then", "after": ". 
We further", "start_char_pos": 889, "end_char_pos": 904}, {"type": "D", "before": "diffusion", "after": null, "start_char_pos": 1027, "end_char_pos": 1036}, {"type": "D", "before": "\\mathcal{C", "after": null, "start_char_pos": 1056, "end_char_pos": 1066}, {"type": "D", "before": "matrix", "after": null, "start_char_pos": 1084, "end_char_pos": 1090}, {"type": "R", "before": "matrix through a", "after": "through the", "start_char_pos": 1181, "end_char_pos": 1197}, {"type": "A", "before": null, "after": ", and hence, in particular, the TVARCV can be used to recover the empirical spectral distribution of the ICV by using existing algorithms", "start_char_pos": 1227, "end_char_pos": 1227}], "sents_char_pos": [0, 144, 243, 557, 777, 896, 1009]} {"doc_id": "1012.0754", "revision_depth": "1", "before_revision": "We propose a general class of models for the simultaneous treatment of equity, corporate bonds, government bonds and derivatives. The noise is generated by a general affine Markov process. The framework allows for stochastic volatility, jumps, the possibility of default and correlations between different assets. We extend the notion of a discounted moment generation function of the log stock price to the case where the underlying can default and show how to calculate it in terms of a coupled system of generalized Riccati equations. This yields an efficient method to compute prices of power payoffs and Fourier transforms . European calls and puts as well as binaries and asset-or-nothing options can then be priced with the fast Fourier transform methods of Carr and Madan (1999) and Lee (2005). Other European payoffs can be approximated by a linear combination of power payoffs and vanilla options. We show the results to be superior to using only power payoffs or vanilla options. We also give conditions for our models to be complete if enough financial instruments are liquidly tradable and study dynamic hedging strategies. 
As an example we discuss a Heston-type stochastic volatility model with possibility of default and stochastic interest rates.", "after_revision": "We propose a general framework for the simultaneous modeling of equity, government bonds, corporate bonds and derivatives. Uncertainty is generated by a general affine Markov process. The setting allows for stochastic volatility, jumps, the possibility of default and correlation between different assets. We show how to calculate discounted complex moments by solving a coupled system of generalized Riccati equations. This yields an efficient method to compute prices of power payoffs . European calls and puts as well as binaries and asset-or-nothing options can be priced with the fast Fourier transform methods of Carr and Madan (1999) and Lee (2005). Other European payoffs can be approximated with a linear combination of government bonds, power payoffs and vanilla options. We show the results to be superior to using only government bonds and power payoffs or government bonds and vanilla options. We also give conditions for European contingent claims in our framework to be replicable if enough financial instruments are liquidly tradable and study dynamic hedging strategies. 
As an example we discuss a Heston-type stochastic volatility model with possibility of default and stochastic interest rates.", "edit_actions": [{"type": "R", "before": "class of models", "after": "framework", "start_char_pos": 21, "end_char_pos": 36}, {"type": "R", "before": "treatment", "after": "modeling", "start_char_pos": 58, "end_char_pos": 67}, {"type": "R", "before": "corporate bonds, government", "after": "government bonds, corporate", "start_char_pos": 79, "end_char_pos": 106}, {"type": "R", "before": "The noise", "after": "Uncertainty", "start_char_pos": 130, "end_char_pos": 139}, {"type": "R", "before": "framework", "after": "setting", "start_char_pos": 193, "end_char_pos": 202}, {"type": "R", "before": "correlations", "after": "correlation", "start_char_pos": 275, "end_char_pos": 287}, {"type": "D", "before": "extend the notion of a discounted moment generation function of the log stock price to the case where the underlying can default and", "after": null, "start_char_pos": 317, "end_char_pos": 449}, {"type": "R", "before": "it in terms of", "after": "discounted complex moments by solving", "start_char_pos": 472, "end_char_pos": 486}, {"type": "D", "before": "and Fourier transforms", "after": null, "start_char_pos": 605, "end_char_pos": 627}, {"type": "D", "before": "then", "after": null, "start_char_pos": 707, "end_char_pos": 711}, {"type": "R", "before": "by", "after": "with", "start_char_pos": 846, "end_char_pos": 848}, {"type": "A", "before": null, "after": "government bonds,", "start_char_pos": 873, "end_char_pos": 873}, {"type": "A", "before": null, "after": "government bonds and", "start_char_pos": 958, "end_char_pos": 958}, {"type": "A", "before": null, "after": "government bonds and", "start_char_pos": 976, "end_char_pos": 976}, {"type": "R", "before": "our models to be complete", "after": "European contingent claims in our framework to be replicable", "start_char_pos": 1022, "end_char_pos": 1047}], "sents_char_pos": [0, 129, 188, 313, 537, 
802, 908, 993, 1139]} {"doc_id": "1101.0211", "revision_depth": "1", "before_revision": "We study spectra of directed networks with inhibitory and excitatory couplings. Particularly, we investigate eigenvector localization properties of various model networks with varying correlations \\tau among their entries. Spectra of random directed networks, where entries are completely un-correlated (\\tau=0) show circular distribution with delocalized eigenvectors, where non-random networks with correlated entries have localized eigenvectors. In order to understand the origin of localization we track the spectra as a function of connection probability and directionality both. The kind of inhibitory and excitatory connections we are considering, low connection probability leads to localized eigenstates near boundary of the circular region, whereas large connection probabilities give rise to isolated delocalized eigenstates. As connections are made directed by making some nodes inhibitory, some of eigenstates start occurring in complex conjugate pairs . The eigenvalue distribution along with localization measure show rich pattern. Spectra of networks having modular structure show distinguishable different features than the random networks. For a very well distinguished community structure (rewiring probability p_r \\sim 0) , the whole spectra is localized except some of eigenstates at boundary . As p_r is increased and network deviates from community structure there is a sudden change in the localization property for very small value of deformation from the perfect community structure. Furthermore, we investigate spectral properties of a metabolic networks of Zebra-fish and compare with spectral properties of various model networks.", "after_revision": "We study spectra of directed networks with inhibitory and excitatory couplings. We investigate in particular eigenvector localization properties of various model networks for different value of correlation among their entries. 
Spectra of random networks, with completely uncorrelated entries show a circular distribution with delocalized eigenvectors, where as networks with correlated entries have localized eigenvectors. In order to understand the origin of localization we track the spectra as a function of connection probability and directionality . As connections are made directed , eigenstates start occurring in complex conjugate pairs and the eigenvalue distribution combined with the localization measure shows a rich pattern. Moreover, for a very well distinguished community structure , the whole spectrum is localized except few eigenstates at boundary of the circular distribution. As the network deviates from the community structure there is a sudden change in the localization property for a very small value of deformation from the perfect community structure. We search for this effect for the whole range of correlation strengths and for different community configurations. Furthermore, we investigate spectral properties of a metabolic network of zebrafish, and compare them with those of the model networks.", "edit_actions": [{"type": "R", "before": "Particularly, we investigate", "after": "We investigate in particular", "start_char_pos": 80, "end_char_pos": 108}, {"type": "R", "before": "with varying correlations \\tau", "after": "for different value of correlation", "start_char_pos": 171, "end_char_pos": 201}, {"type": "R", "before": "directed networks, where entries are completely un-correlated (\\tau=0) show", "after": "networks, with completely uncorrelated entries show a", "start_char_pos": 241, "end_char_pos": 316}, {"type": "R", "before": "non-random", "after": "as", "start_char_pos": 376, "end_char_pos": 386}, {"type": "R", "before": "both. 
The kind of inhibitory and excitatory connections we are considering, low connection probability leads to localized eigenstates near boundary of the circular region, whereas large connection probabilities give rise to isolated delocalized eigenstates.", "after": ".", "start_char_pos": 579, "end_char_pos": 836}, {"type": "R", "before": "by making some nodes inhibitory, some of", "after": ",", "start_char_pos": 870, "end_char_pos": 910}, {"type": "R", "before": ". The eigenvalue distribution along with localization measure show", "after": "and the eigenvalue distribution combined with the localization measure shows a", "start_char_pos": 966, "end_char_pos": 1032}, {"type": "R", "before": "Spectra of networks having modular structure show distinguishable different features than the random networks. For", "after": "Moreover, for", "start_char_pos": 1047, "end_char_pos": 1161}, {"type": "D", "before": "(rewiring probability p_r \\sim 0)", "after": null, "start_char_pos": 1208, "end_char_pos": 1241}, {"type": "R", "before": "spectra", "after": "spectrum", "start_char_pos": 1254, "end_char_pos": 1261}, {"type": "R", "before": "some of", "after": "few", "start_char_pos": 1282, "end_char_pos": 1289}, {"type": "R", "before": ". As p_r is increased and", "after": "of the circular distribution. 
As the", "start_char_pos": 1314, "end_char_pos": 1339}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1362, "end_char_pos": 1362}, {"type": "A", "before": null, "after": "a", "start_char_pos": 1441, "end_char_pos": 1441}, {"type": "A", "before": null, "after": "We search for this effect for the whole range of correlation strengths and for different community configurations.", "start_char_pos": 1512, "end_char_pos": 1512}, {"type": "R", "before": "networks of Zebra-fish and compare with spectral properties of various", "after": "network of zebrafish, and compare them with those of the", "start_char_pos": 1576, "end_char_pos": 1646}], "sents_char_pos": [0, 79, 222, 448, 584, 836, 967, 1046, 1157, 1315, 1511]} {"doc_id": "1104.0359", "revision_depth": "2", "before_revision": "We study the asymptotic behaviour of the difference between the Value at Risks VaR(L) and VaR(L+S) for heavy tailed random variables L and S as an application to the sensitivity analysis of quantitative operational risk management in the framework of an advanced measurement approach (AMA) of Basel II . Here the variable L describes the loss amount of the present risk profile and S means the loss amount caused by an additional loss factor. We have different types of results according to the magnitude of the relationship of the thicknesses of the tails of L and S. Especially if the tail of S is sufficiently thinner than that of L, then the difference between prior and posterior risk amounts VaR(L+S) - VaR(L) is asymptotically equivalent to the component VaR of S (which is equal to the expected lossof S when L and S are independent) .", "after_revision": "We study the asymptotic behavior of the difference between the values at risk VaR(L) and VaR(L+S) for heavy tailed random variables L and S for application in sensitivity analysis of quantitative operational risk management within the framework of the advanced measurement approach of Basel II (and III). 
Here L describes the loss amount of the present risk profile and S describes the loss amount caused by an additional loss factor. We obtain different types of results according to the relative magnitudes of the thicknesses of the tails of L and S. In particular, if the tail of S is sufficiently thinner than the tail of L, then the difference between prior and posterior risk amounts VaR(L+S) - VaR(L) is asymptotically equivalent to the expectation (expected loss) of S .", "edit_actions": [{"type": "R", "before": "behaviour", "after": "behavior", "start_char_pos": 24, "end_char_pos": 33}, {"type": "R", "before": "Value at Risks", "after": "values at risk", "start_char_pos": 64, "end_char_pos": 78}, {"type": "R", "before": "as an application to the", "after": "for application in", "start_char_pos": 141, "end_char_pos": 165}, {"type": "R", "before": "in", "after": "within", "start_char_pos": 231, "end_char_pos": 233}, {"type": "R", "before": "an", "after": "the", "start_char_pos": 251, "end_char_pos": 253}, {"type": "D", "before": "(AMA)", "after": null, "start_char_pos": 284, "end_char_pos": 289}, {"type": "R", "before": ". Here the variable", "after": "(and III). 
Here", "start_char_pos": 302, "end_char_pos": 321}, {"type": "R", "before": "means", "after": "describes", "start_char_pos": 384, "end_char_pos": 389}, {"type": "R", "before": "have", "after": "obtain", "start_char_pos": 446, "end_char_pos": 450}, {"type": "R", "before": "magnitude of the relationship of the", "after": "relative magnitudes of the", "start_char_pos": 495, "end_char_pos": 531}, {"type": "R", "before": "Especially", "after": "In particular,", "start_char_pos": 569, "end_char_pos": 579}, {"type": "R", "before": "that", "after": "the tail", "start_char_pos": 626, "end_char_pos": 630}, {"type": "R", "before": "component VaR of S (which is equal to the expected lossof S when L and S are independent)", "after": "expectation (expected loss) of S", "start_char_pos": 752, "end_char_pos": 841}], "sents_char_pos": [0, 442]} {"doc_id": "1108.0187", "revision_depth": "1", "before_revision": "Our purpose in this paper is to obtain the exact distribution of the number of buffer starvations within a sequence of N consecutive packet arrivals . The buffer is modeled as an M/M/1 queue, plus the consideration of bursty arrivals characterized by an interrupted Poisson process . When the buffer is empty, the service restarts after a certain amount of packets are prefetched . With this goal, we propose two approaches , one of which is based on Ballot theorem , and the other uses recursive equations. The Ballot theorem approach gives an explicit solution, but at the cost of the high complexity order in certain circumstances . The recursive approach, though not offering an explicit result, needs fewer computations.We further propose a fluid analysis of starvation probability on the file level, given the distribution of file size and the traffic intensity. The starvation probabilities of this paper have many potential applications . 
We apply them to optimize the quality of experience (QoE) of media streaming service , by exploiting the tradeoff between the start-up delay and the starvation .", "after_revision": "Our purpose in this paper is to characterize buffer starvations for streaming services . The buffer is modeled as an M/M/1 queue, plus the consideration of bursty arrivals . When the buffer is empty, the service restarts after a certain amount of packets are prefetched . With this goal, we propose two approaches to obtain theexact distribution of the number of buffer starvations , one of which is based on Ballot theorem , and the other uses recursive equations. The Ballot theorem approach gives an explicit result. We extend this approach to the scenario with a constant playback rate using T\\`{a . The recursive approach, though not offering an explicit result, can obtain the distribution of starvations with non-independent and identically distributed (i.i.d.) arrival process in which an ON/OFF bursty arrival process is considered in this work. We further compute the starvation probability as a function of the amount of prefetched packets for a large number of files via a fluid analysis. 
Among many potential applications of starvation analysis, we show how to apply it to optimize the objective quality of experience (QoE) of media streaming , by exploiting the tradeoff between startup/rebuffering delay and starvations .", "edit_actions": [{"type": "R", "before": "obtain the exact distribution of the number of buffer starvations within a sequence of N consecutive packet arrivals", "after": "characterize buffer starvations for streaming services", "start_char_pos": 32, "end_char_pos": 148}, {"type": "D", "before": "characterized by an interrupted Poisson process", "after": null, "start_char_pos": 234, "end_char_pos": 281}, {"type": "R", "before": "prefetched", "after": "prefetched", "start_char_pos": 369, "end_char_pos": 379}, {"type": "A", "before": null, "after": "to obtain the", "start_char_pos": 424, "end_char_pos": 424}, {"type": "A", "before": null, "after": "exact distribution", "start_char_pos": 424, "end_char_pos": 424}, {"type": "A", "before": null, "after": "of the number of buffer starvations", "start_char_pos": 425, "end_char_pos": 425}, {"type": "R", "before": "Ballot theorem", "after": "Ballot theorem", "start_char_pos": 453, "end_char_pos": 467}, {"type": "R", "before": "solution, but at the cost of the high complexity order in certain circumstances", "after": "result. We extend this approach to the scenario with a constant playback rate using T\\`{a", "start_char_pos": 556, "end_char_pos": 635}, {"type": "R", "before": "needs fewer computations.We further propose a fluid analysis of starvation probability on the file level, given the distribution of file size and the traffic intensity. The starvation probabilities of this paper have", "after": "can obtain the distribution of starvations with non-independent and identically distributed (i.i.d.) arrival process in which an ON/OFF bursty arrival process is considered in this work. 
We further compute the starvation probability as a function of the amount of prefetched packets for a large number of files via a fluid analysis. Among", "start_char_pos": 702, "end_char_pos": 918}, {"type": "R", "before": ". We apply them", "after": "of starvation analysis, we show how to apply it", "start_char_pos": 947, "end_char_pos": 962}, {"type": "A", "before": null, "after": "objective", "start_char_pos": 979, "end_char_pos": 979}, {"type": "D", "before": "service", "after": null, "start_char_pos": 1027, "end_char_pos": 1034}, {"type": "R", "before": "the start-up delay and the starvation", "after": "startup/rebuffering delay and starvations", "start_char_pos": 1072, "end_char_pos": 1109}], "sents_char_pos": [0, 150, 283, 381, 509, 637, 727, 870, 948]} {"doc_id": "1109.0807", "revision_depth": "1", "before_revision": "Consider a large Boolean network with a feed forward structure. Given a probability distribution for the inputs, can one find-possibly small-collections of input nodes that determine the states of most other nodes in the network? To identify these nodes , a notion that quantifies the determinative power of an input over states in the network is needed. We argue that the mutual information (MI) between a subset of the inputs X = {X_1, ..., X_n} of node i and the function f_i(X) associated with node i quantifies the determinative power of this subset of inputs over node i. To study the relation of determinative power to sensitivity to perturbations , we relate the MI to measures of perturbations, such as the influence of a variable, in terms of inequalities. The result shows that, maybe surprisingly, an input that has large influence does not necessarily have large determinative power. The main tool for the analysis is Fourier analysis of Boolean functions. Whether a function is sensitive to perturbations or not, and which are the determinative inputs, depends on which coefficients the Fourier spectrum is concentrated on. 
We also consider unate functions which play an important role in genetic regulatory networks . For those, a particular relation between the influence and MI is found . As an application of our methods , we analyze the large-scale regulatory network of E. colinumerically: We identify the most determinative nodes and show that a small set of those reduces the overall uncertainty of network states significantly. The network is also found to be tolerant to perturbations of its inputs , which can be seen from the Fourier spectrum of its functions .", "after_revision": "Consider a large Boolean network with a feed forward structure. Given a probability distribution on the inputs, can one find, possibly small, collections of input nodes that determine the states of most other nodes in the network? To answer this question , a notion that quantifies the determinative power of an input over the states of the nodes in the network is needed. We argue that the mutual information (MI) between a given subset of the inputs X = {X_1, ..., X_n} of some node i and its associated function f_i(X) quantifies the determinative power of this set of inputs over node i. We compare the determinative power of a set of inputs to the sensitivity to perturbations to these inputs, and find that, maybe surprisingly, an input that has large sensitivity to perturbations does not necessarily have large determinative power. However, for unate functions, which play an important role in genetic regulatory networks , we find a direct relation between MI and sensitivity to perturbations . As an application of our results , we analyze the large-scale regulatory network of Escherichia coli. We identify the most determinative nodes and show that a small subset of those reduces the overall uncertainty of the network state significantly. 
Furthermore, the network is found to be tolerant to perturbations of its inputs .", "edit_actions": [{"type": "R", "before": "for", "after": "on", "start_char_pos": 97, "end_char_pos": 100}, {"type": "R", "before": "find-possibly small-collections", "after": "find, possibly small, collections", "start_char_pos": 121, "end_char_pos": 152}, {"type": "R", "before": "identify these nodes", "after": "answer this question", "start_char_pos": 233, "end_char_pos": 253}, {"type": "R", "before": "states", "after": "the states of the nodes", "start_char_pos": 322, "end_char_pos": 328}, {"type": "A", "before": null, "after": "given", "start_char_pos": 407, "end_char_pos": 407}, {"type": "A", "before": null, "after": "some", "start_char_pos": 452, "end_char_pos": 452}, {"type": "R", "before": "the", "after": "its associated", "start_char_pos": 464, "end_char_pos": 467}, {"type": "D", "before": "associated with node i", "after": null, "start_char_pos": 484, "end_char_pos": 506}, {"type": "R", "before": "subset", "after": "set", "start_char_pos": 550, "end_char_pos": 556}, {"type": "R", "before": "To study the relation of determinative power to", "after": "We compare the determinative power of a set of inputs to the", "start_char_pos": 580, "end_char_pos": 627}, {"type": "R", "before": ", we relate the MI to measures of perturbations, such as the influence of a variable, in terms of inequalities. The result shows", "after": "to these inputs, and find", "start_char_pos": 657, "end_char_pos": 785}, {"type": "R", "before": "influence", "after": "sensitivity to perturbations", "start_char_pos": 836, "end_char_pos": 845}, {"type": "R", "before": "The main tool for the analysis is Fourier analysis of Boolean functions. Whether a function is sensitive to perturbations or not, and which are the determinative inputs, depends on which coefficients the Fourier spectrum is concentrated on. 
We also consider unate functions which", "after": "However, for unate functions, which", "start_char_pos": 899, "end_char_pos": 1178}, {"type": "R", "before": ". For those, a particular relation between the influence and MI is found", "after": ", we find a direct relation between MI and sensitivity to perturbations", "start_char_pos": 1233, "end_char_pos": 1305}, {"type": "R", "before": "methods", "after": "results", "start_char_pos": 1333, "end_char_pos": 1340}, {"type": "R", "before": "E. colinumerically:", "after": "Escherichia coli.", "start_char_pos": 1392, "end_char_pos": 1411}, {"type": "R", "before": "set", "after": "subset", "start_char_pos": 1475, "end_char_pos": 1478}, {"type": "R", "before": "network states significantly. The network is also", "after": "the network state significantly. Furthermore, the network is", "start_char_pos": 1523, "end_char_pos": 1572}, {"type": "D", "before": ", which can be seen from the Fourier spectrum of its functions", "after": null, "start_char_pos": 1625, "end_char_pos": 1687}], "sents_char_pos": [0, 63, 229, 354, 579, 768, 898, 971, 1139, 1234, 1307, 1552]} {"doc_id": "1111.2169", "revision_depth": "1", "before_revision": "The theory of L\\'evy models for asset pricing simplifies considerably if one takes a pricing kernel approach, which enables one to bypass market incompleteness issues. The special case of a geometric L\\'evy model (GLM) with constant parameters can be regarded as a natural generalisation of the standard geometric Brownian motion model used in the Black-Scholes theory . In one dimension, once the underlying L\\'evy process has been specified, the GLM is characterised by four parameters: the initial asset price, the interest rate, the volatility, and a risk aversionfactor . The pricing kernel is given by the product of a discount factor and the Esscher martingale associated with the risk aversion parameter. 
The model is fixed by the requirement that for each asset the product of the asset price and the pricing kernel should be a martingale. In the GBMcase , the risk aversion factor is the so-called market price of risk. In the GLMcase , this interpretation is no longer valid; instead, the excess rate of return is given by a nonlinear function of the volatility and the risk aversion. It is shown that for positive values of the volatility and the risk aversion the excess rate of return above the interest rate is positive, and is monotonically increasing with respect to these variables. In the case of foreign exchange, Siegel's paradox implies that it is possible to construct foreign exchange models for which the excess rate of return (above the interest rate differential) is positive both for the exchange rate and the inverse exchange rate. This condition is shown to hold for any geometric L\\'evy model for foreign exchange in which the volatility exceeds the risk aversion.", "after_revision": "The geometric L\\'evy model (GLM) is a natural generalisation of the geometric Brownian motion model (GBM) used in the derivation of the Black-Scholes formula. The theory of such models simplifies considerably if one takes a pricing kernel approach . In one dimension, once the underlying L\\'evy process has been specified, the GLM has four parameters: the initial price, the interest rate, the volatility, and the risk aversion . The pricing kernel is the product of a discount factor and a risk aversion martingale. For GBM , the risk aversion parameter is the market price of risk. For a GLM , this interpretation is not valid: the excess rate of return is a nonlinear function of the volatility and the risk aversion. It is shown that for positive volatility and risk aversion the excess rate of return above the interest rate is positive, and is increasing with respect to these variables. 
In the case of foreign exchange, Siegel's paradox implies that one can construct foreign exchange models for which the excess rate of return is positive both for the exchange rate and the inverse exchange rate. This condition is shown to hold for any geometric L\\'evy model for foreign exchange in which volatility exceeds risk aversion.", "edit_actions": [{"type": "D", "before": "theory of L\\'evy models for asset pricing simplifies considerably if one takes a pricing kernel approach, which enables one to bypass market incompleteness issues. The special case of a", "after": null, "start_char_pos": 4, "end_char_pos": 189}, {"type": "R", "before": "with constant parameters can be regarded as", "after": "is", "start_char_pos": 219, "end_char_pos": 262}, {"type": "D", "before": "standard", "after": null, "start_char_pos": 295, "end_char_pos": 303}, {"type": "A", "before": null, "after": "(GBM)", "start_char_pos": 336, "end_char_pos": 336}, {"type": "A", "before": null, "after": "derivation of the", "start_char_pos": 349, "end_char_pos": 349}, {"type": "R", "before": "theory", "after": "formula. The theory of such models simplifies considerably if one takes a pricing kernel approach", "start_char_pos": 364, "end_char_pos": 370}, {"type": "R", "before": "is characterised by", "after": "has", "start_char_pos": 454, "end_char_pos": 473}, {"type": "D", "before": "asset", "after": null, "start_char_pos": 503, "end_char_pos": 508}, {"type": "R", "before": "a risk aversionfactor", "after": "the risk aversion", "start_char_pos": 555, "end_char_pos": 576}, {"type": "D", "before": "given by", "after": null, "start_char_pos": 601, "end_char_pos": 609}, {"type": "R", "before": "the Esscher martingale associated with the risk aversion parameter. The model is fixed by the requirement that for each asset the product of the asset price and the pricing kernel should be a martingale. In the GBMcase", "after": "a risk aversion martingale. 
For GBM", "start_char_pos": 647, "end_char_pos": 865}, {"type": "R", "before": "factor is the so-called", "after": "parameter is the", "start_char_pos": 886, "end_char_pos": 909}, {"type": "R", "before": "In the GLMcase", "after": "For a GLM", "start_char_pos": 932, "end_char_pos": 946}, {"type": "R", "before": "no longer valid; instead,", "after": "not valid:", "start_char_pos": 972, "end_char_pos": 997}, {"type": "D", "before": "given by", "after": null, "start_char_pos": 1027, "end_char_pos": 1035}, {"type": "R", "before": "values of the volatility and the", "after": "volatility and", "start_char_pos": 1128, "end_char_pos": 1160}, {"type": "D", "before": "monotonically", "after": null, "start_char_pos": 1245, "end_char_pos": 1258}, {"type": "R", "before": "it is possible to", "after": "one can", "start_char_pos": 1366, "end_char_pos": 1383}, {"type": "D", "before": "(above the interest rate differential)", "after": null, "start_char_pos": 1454, "end_char_pos": 1492}, {"type": "R", "before": "the volatility exceeds the", "after": "volatility exceeds", "start_char_pos": 1656, "end_char_pos": 1682}], "sents_char_pos": [0, 167, 372, 578, 714, 850, 931, 988, 1097, 1302, 1562]} {"doc_id": "1112.0270", "revision_depth": "2", "before_revision": "Activation cascadesare a prevalent feature in cellular mechanisms for signaltransduction . Here we study the classic model of linear activation cascades and obtain analytical solutions in terms of lower incomplete gamma functions . We show that in the special but important case of optimal gain cascades (i.e., when all the deactivation rates are identical) the downstream output of an entire cascade can be represented exactly as a single nonlinear module containing an incomplete gamma function with parameters dependent on the input signal as well as the rates and length of the cascade . 
Our results can be used to represent optimal cascades efficiently by reducing the number of equations and parameters in computational ODE models under a variety of inputs. If the requirement for strict optimality is relaxed (under random deactivation rates ), we show that the reduced representation can also reproduce the observed variability of downstream responses. In addition, we show that cascades can be rearranged so that homogeneous blocks can be lumped and represented by incomplete gamma functions. We also illustrate how the reduced representation can be used to fit data; in particular, the length of the cascade appears as a real-valued parameter and can thus be fitted in the same manner as Hill coefficients. Finally, we use our results to show how the output of delay differential equation models can be approximated with the use of simple expressions involving the incomplete gamma function .", "after_revision": "Cellular signal transduction usually involves activation cascades, the sequential activation of a series of proteins following the reception of an input signal . Here we study the classic model of weakly activated cascades and obtain analytical solutions for a variety of inputs . We show that in the special but important case of optimal-gain cascades (i.e., when the deactivation rates are identical) the downstream output of the cascade can be represented exactly as a lumped nonlinear module containing an incomplete gamma function with real parameters that depend on the rates and length of the cascade , as well as parameters of the input signal. The expressions obtained can be applied to the non-identical case when the deactivation rates are random to capture the variability in the cascade outputs. We also show that cascades can be rearranged so that blocks with similar rates can be lumped and represented through our nonlinear modules. 
Our results can be used both to represent cascades in computational models of differential equations and to fit data efficiently, by reducing the number of equations and parameters involved. In particular, the length of the cascade appears as a real-valued parameter and can thus be fitted in the same manner as Hill coefficients. Finally, we show how the obtained nonlinear modules can be used instead of delay differential equations to model delays in signal transduction .", "edit_actions": [{"type": "R", "before": "Activation cascadesare a prevalent feature in cellular mechanisms for signaltransduction", "after": "Cellular signal transduction usually involves activation cascades, the sequential activation of a series of proteins following the reception of an input signal", "start_char_pos": 0, "end_char_pos": 88}, {"type": "R", "before": "linear activation", "after": "weakly activated", "start_char_pos": 126, "end_char_pos": 143}, {"type": "R", "before": "in terms of lower incomplete gamma functions", "after": "for a variety of inputs", "start_char_pos": 185, "end_char_pos": 229}, {"type": "R", "before": "optimal gain", "after": "optimal-gain", "start_char_pos": 282, "end_char_pos": 294}, {"type": "D", "before": "all", "after": null, "start_char_pos": 316, "end_char_pos": 319}, {"type": "R", "before": "an entire", "after": "the", "start_char_pos": 383, "end_char_pos": 392}, {"type": "R", "before": "single", "after": "lumped", "start_char_pos": 433, "end_char_pos": 439}, {"type": "R", "before": "parameters dependent on the input signal as well as the", "after": "real parameters that depend on the", "start_char_pos": 502, "end_char_pos": 557}, {"type": "R", "before": ". Our results can be used to represent optimal cascades efficiently by reducing the number of equations and parameters in computational ODE models under a variety of inputs. 
If the requirement for strict optimality is relaxed (under random deactivation rates ), we show that the reduced representation can also reproduce the observed variability of downstream responses. In addition, we", "after": ", as well as parameters of the input signal. The expressions obtained can be applied to the non-identical case when the deactivation rates are random to capture the variability in the cascade outputs. We also", "start_char_pos": 590, "end_char_pos": 976}, {"type": "R", "before": "homogeneous blocks", "after": "blocks with similar rates", "start_char_pos": 1022, "end_char_pos": 1040}, {"type": "R", "before": "by incomplete gamma functions. We also illustrate how the reduced representation", "after": "through our nonlinear modules. Our results", "start_char_pos": 1071, "end_char_pos": 1151}, {"type": "R", "before": "to fit data; in", "after": "both to represent cascades in computational models of differential equations and to fit data efficiently, by reducing the number of equations and parameters involved. In", "start_char_pos": 1164, "end_char_pos": 1179}, {"type": "D", "before": "use our results to", "after": null, "start_char_pos": 1329, "end_char_pos": 1347}, {"type": "R", "before": "output of delay differential equation models can be approximated with the use of simple expressions involving the incomplete gamma function", "after": "obtained nonlinear modules can be used instead of delay differential equations to model delays in signal transduction", "start_char_pos": 1361, "end_char_pos": 1500}], "sents_char_pos": [0, 90, 231, 591, 763, 960, 1101, 1176, 1316]} {"doc_id": "1112.2250", "revision_depth": "1", "before_revision": "The Web emerged as the antidoteto rapidly increasing quantity of accumulated knowledge because it successfully enables massive representation and communication with minimum costs. 
Despite the fact that its gigantic scale and impact make difficult to anticipate the effects in humans, we claim from it to be fast, secure, reliable, all-inclusive and trustworthy . It is time for science to compensate and provide an epistemological \"antidote\" to these issues. On this campaign, Philosophy should be in the front line by forming the relevant questions . We initiate the dialogue for a theory about being in the Web that will serve as a bridge between philosophical thinking and engineering. We analyze existence and spatiotemporality in the Web , as a closed techno-social system, and how it transforms the traditional conceptions about actuality. Location in the Web space is specified by the Web being's URI and the URI's of incoming and outgoing links. The primer role of visiting durations is best approximated by Bergsonian time. Physical space is becoming more discoverableand traceable. Components of human activity are becoming asynchronous, (partially) synchronous and continuous in the Web . Networked individuals operate in a flexible , less-bounded and spatially dispersed environment. The resulting issues are related to the self-determination of being in the Web and the prerequisites for the Web to remain an open platform for innovation and participation . The Being-Query model address the above issues, based on a simple and all-encompassing hypothesis about existence and the generic motive of searching. The proposed approach incorporates that Users and Being are two sides of the same coin by providing a computable framework of study. Furthermore, extents ANT to include the Web ecosystem and highlights the need for joint analysis of online and offline phenomena .", "after_revision": "The Web initially emerged as an \"antidote\" to accumulated scientific knowledge since it enables global representation and communication with minimum costs. 
Its gigantic scale and interdependence incommode our ability to find relevant information and develop trustworthy contexts . It is time for science to compensate by providing an epistemological \"antidote\" to Web issues. Philosophy should be in the front line by forming the salient questions and analysis. The scope of our research is to provide a theory about the Web being that will bridge philosophical thinking and engineering. We analyze existence and spatiotemporality in the Web and how it transforms the traditional actualities. The Web space is specified by incoming and outgoing links. The primordial role of visiting durations in Web's existence is approximated by Bergsonian time. The physical space becomes more discoverable. The human activity can be asynchronous, synchronous and continuous . Networked individuals operate in a flexible and spatially dispersed environment. The resulting issues concern the self-determination of a being and the way in which the Web could be a free and open platform for innovation and participation .", "edit_actions": [{"type": "R", "before": "emerged as the antidoteto rapidly increasing quantity of accumulated knowledge because it successfully enables massive", "after": "initially emerged as an \"antidote\" to accumulated scientific knowledge since it enables global", "start_char_pos": 8, "end_char_pos": 126}, {"type": "R", "before": "Despite the fact that its", "after": "Its", "start_char_pos": 180, "end_char_pos": 205}, {"type": "R", "before": "impact make difficult to anticipate the effects in humans, we claim from it to be fast, secure, reliable, all-inclusive and trustworthy", "after": "interdependence incommode our ability to find relevant information and develop trustworthy contexts", "start_char_pos": 225, "end_char_pos": 360}, {"type": "R", "before": "and provide", "after": "by providing", "start_char_pos": 400, "end_char_pos": 411}, {"type": "R", "before": "these issues. 
On this campaign,", "after": "Web issues.", "start_char_pos": 445, "end_char_pos": 476}, {"type": "R", "before": "relevant questions . We initiate the dialogue for", "after": "salient questions and analysis. The scope of our research is to provide", "start_char_pos": 531, "end_char_pos": 580}, {"type": "R", "before": "being in the Web that will serve as a bridge between", "after": "the Web being that will bridge", "start_char_pos": 596, "end_char_pos": 648}, {"type": "D", "before": ", as a closed techno-social system,", "after": null, "start_char_pos": 743, "end_char_pos": 778}, {"type": "R", "before": "conceptions about actuality. Location in the", "after": "actualities. The", "start_char_pos": 817, "end_char_pos": 861}, {"type": "D", "before": "the Web being's URI and the URI's of", "after": null, "start_char_pos": 888, "end_char_pos": 924}, {"type": "R", "before": "primer", "after": "primordial", "start_char_pos": 958, "end_char_pos": 964}, {"type": "R", "before": "is best", "after": "in Web's existence is", "start_char_pos": 992, "end_char_pos": 999}, {"type": "R", "before": "Physical space is becoming more discoverableand traceable. Components of human activity are becoming asynchronous, (partially)", "after": "The physical space becomes more discoverable. The human activity can be asynchronous,", "start_char_pos": 1033, "end_char_pos": 1159}, {"type": "D", "before": "in the Web", "after": null, "start_char_pos": 1187, "end_char_pos": 1197}, {"type": "D", "before": ", less-bounded", "after": null, "start_char_pos": 1244, "end_char_pos": 1258}, {"type": "R", "before": "are related to", "after": "concern", "start_char_pos": 1317, "end_char_pos": 1331}, {"type": "R", "before": "being in the Web and the prerequisites for the Web to remain an", "after": "a being and the way in which the Web could be a free and", "start_char_pos": 1358, "end_char_pos": 1421}, {"type": "D", "before": ". 
The Being-Query model address the above issues, based on a simple and all-encompassing hypothesis about existence and the generic motive of searching. The proposed approach incorporates that Users and Being are two sides of the same coin by providing a computable framework of study. Furthermore, extents ANT to include the Web ecosystem and highlights the need for joint analysis of online and offline phenomena", "after": null, "start_char_pos": 1469, "end_char_pos": 1883}], "sents_char_pos": [0, 179, 362, 458, 551, 688, 845, 953, 1032, 1091, 1295, 1470, 1621, 1754]} {"doc_id": "1202.1854", "revision_depth": "1", "before_revision": "This paper proposes generalization of the popular realized volatility framework by allowing its measurement in the time-frequency domain and bringing robustness to both noise as well as jumps. Based on the generalization of Fan and Wang (2007) approach using smooth wavelets and Maximum Overlap Discrete Wavelet Transform, we present new, general theory for wavelet decomposition of integrated variance. Using wavelets, we not only gain decomposition of the realized variance into several investment horizons , but we are also able to estimate the jumpsconsistently . Basing our estimator in the two-scale realized variance framework of Zhang et al. (2005) , we are able to utilize all available data and get unbiased estimator in the presence of noise as well. The theory is also tested in a large numerical study of the small sample performance of the estimators and compared to other popular realized variation estimators under different simulation settings with changing noise as well as jump level . The results reveal that our wavelet-based estimator is able to estimate and forecast the realized measures with the greatest precision. Another notable contribution lies in the application of the presented theory. 
Our time-frequency estimators not only produce more efficient estimates, but also decompose the realized variation into arbitrarily chosen investment horizons. The results thus provide a better understanding of the dynamics of stock markets .", "after_revision": "We introduce wavelet-based methodology for estimation of realized variance allowing its measurement in the time-frequency domain . Using smooth wavelets and Maximum Overlap Discrete Wavelet Transform, we allow for the decomposition of the realized variance into several investment horizons and jumps . Basing our estimator in the two-scale realized variance framework , we are able to utilize all available data and get feasible estimator in the presence of microstructure noise as well. The estimator is tested in a large numerical study of the finite sample performance and is compared to other popular realized variation estimators . We use different simulation settings with changing noise as well as jump level in different price processes including long memory fractional stochastic volatility model . The results reveal that our wavelet-based estimator is able to estimate and forecast the realized measures with the greatest precision. Our time-frequency estimators not only produce feasible estimates, but also decompose the realized variation into arbitrarily chosen investment horizons. We apply it to study the volatility of forex futures during the recent crisis at several investment horizons and obtain the results which provide us with better understanding of the volatility dynamics .", "edit_actions": [{"type": "R", "before": "This paper proposes generalization of the popular realized volatility framework by", "after": "We introduce wavelet-based methodology for estimation of realized variance", "start_char_pos": 0, "end_char_pos": 82}, {"type": "R", "before": "and bringing robustness to both noise as well as jumps. Based on the generalization of Fan and Wang (2007) approach using", "after": ". 
Using", "start_char_pos": 137, "end_char_pos": 258}, {"type": "R", "before": "present new, general theory for wavelet decomposition of integrated variance. Using wavelets, we not only gain decomposition of", "after": "allow for the decomposition of", "start_char_pos": 326, "end_char_pos": 453}, {"type": "R", "before": ", but we are also able to estimate the jumps consistently", "after": "and jumps", "start_char_pos": 509, "end_char_pos": 565}, {"type": "D", "before": "of Zhang et al. (2005)", "after": null, "start_char_pos": 634, "end_char_pos": 656}, {"type": "R", "before": "unbiased", "after": "feasible", "start_char_pos": 709, "end_char_pos": 717}, {"type": "A", "before": null, "after": "microstructure", "start_char_pos": 747, "end_char_pos": 747}, {"type": "R", "before": "theory is also", "after": "estimator is", "start_char_pos": 767, "end_char_pos": 781}, {"type": "R", "before": "small sample performance of the estimators and", "after": "finite sample performance and is", "start_char_pos": 823, "end_char_pos": 869}, {"type": "R", "before": "under", "after": ". 
We use", "start_char_pos": 926, "end_char_pos": 931}, {"type": "A", "before": null, "after": "in different price processes including long memory fractional stochastic volatility model", "start_char_pos": 1004, "end_char_pos": 1004}, {"type": "D", "before": "Another notable contribution lies in the application of the presented theory.", "after": null, "start_char_pos": 1143, "end_char_pos": 1220}, {"type": "R", "before": "more efficient", "after": "feasible", "start_char_pos": 1268, "end_char_pos": 1282}, {"type": "R", "before": "The results thus provide a", "after": "We apply it to study the volatility of forex futures during the recent crisis at several investment horizons and obtain the results which provide us with", "start_char_pos": 1381, "end_char_pos": 1407}, {"type": "R", "before": "dynamics of stock markets", "after": "volatility dynamics", "start_char_pos": 1436, "end_char_pos": 1461}], "sents_char_pos": [0, 192, 403, 762, 1006, 1142, 1220, 1380]} {"doc_id": "1205.1861", "revision_depth": "1", "before_revision": " Recently, many studies indicated that the minimum spanning tree (MST) network whose metric distance is defined by using correlation coefficients have strong implications on extracting information from return time series. However in many cases researchers may hope to investigate the strength of interactions but not the directions of them. In order to study the strength of interaction and connection of financial asset returns we propose a modified minimum spanning tree network whose metric distance is defined from absolute cross-correlation coefficients. We had investigated 69 daily financial time series, which constituted by 3 types finance assets (29 stock market indicator time series , 21 currency futures price time series and 19 commodity futures price time series). Empirical analyses show that the MST network of returns is time-dependent in overall structure, while same type financial assets usually keep stable inter-connections. 
Moreover each asset in same group show similar economic characters. In other words, each group concerned with one kind of traditional financial commodity. In addition, we find the time-lag between stock market indicator volatility time series and EUA (EU allowances) , WTI (West Texas Intermediate) volatility time series. The peak of cross-correlation function of volatility time series between EUA (or WTI) and stock market indicators show a significant time shift (> 20 days) from 0.", "after_revision": "In a highly interdependent economic world, the nature of relationships between financial entities is becoming an increasingly important area of study. Recently, many studies have shown the usefulness of minimal spanning trees (MST) in extracting interactions between financial entities. Here, we propose a modified MST network whose metric distance is defined in terms of cross-correlation coefficient absolute values, enabling the connections between anticorrelated entities to manifest properly. We investigate 69 daily time series, comprising three types of financial assets: 28 stock market indicators , 21 currency futures , and 20 commodity futures. We show that though the resulting MST network evolves over time, the financial assets of similar type tend to have connections which are stable over time. In addition, we find a characteristic time lag between the volatility time series of the stock market indicators and those of the EU CO2 emission allowance (EUA) and crude oil futures (WTI). 
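The modified metric this abstract describes — a distance built from absolute cross-correlations so that strongly anticorrelated assets also end up close — can be sketched with the usual mapping d = sqrt(2(1 − |rho|)) and Prim's algorithm; the 4-asset correlation matrix below is invented for illustration, not the paper's 69-series data.

```python
import math

def abs_corr_distance(rho):
    # d in [0, sqrt(2)]: 0 when |rho| = 1, i.e. strong coupling of either sign
    return math.sqrt(2.0 * (1.0 - abs(rho)))

def mst_prim(dist):
    """Prim's algorithm on a dense symmetric distance matrix; returns the
    n - 1 tree edges as (parent, child) index pairs."""
    n = len(dist)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known connection into the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v] = dist[u][v]
                parent[v] = u
    return edges

# toy correlation matrix: assets 0-1 strongly anticorrelated,
# 1-2 strongly correlated, asset 3 only weakly related to the rest
corr = [
    [ 1.0, -0.9,  0.2,  0.1],
    [-0.9,  1.0,  0.8,  0.1],
    [ 0.2,  0.8,  1.0,  0.3],
    [ 0.1,  0.1,  0.3,  1.0],
]
dist = [[abs_corr_distance(r) for r in row] for row in corr]
print(sorted(mst_prim(dist)))  # [(0, 1), (1, 2), (2, 3)]
```

Under the conventional metric sqrt(2(1 − rho)) the anticorrelated pair 0-1 would sit at maximal distance; with the absolute-value variant it becomes the shortest edge of the tree, which is exactly the point of the modification.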
This time lag is given by the peak of the cross-correlation function of the volatility time series EUA (or WTI) with that of the stock market indicators , and is markedly different (> 20 days) from 0, showing that the volatility of stock market indicators today can predict the volatility of EU emissions allowances and of crude oil in the near future.", "edit_actions": [{"type": "A", "before": null, "after": "In a highly interdependent economic world, the nature of relationships between financial entities is becoming an increasingly important area of study.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "indicated that the minimum spanning tree", "after": "have shown the usefulness of minimal spanning trees", "start_char_pos": 24, "end_char_pos": 64}, {"type": "R", "before": "network whose metric distance is defined by using correlation coefficients have strong implications on extracting information from return time series. However in many cases researchers may hope to investigate the strength of interactions but not the directions of them. In order to study the strength of interaction and connection of financial asset returns", "after": "in extracting interactions between financial entities. Here,", "start_char_pos": 71, "end_char_pos": 426}, {"type": "R", "before": "modified minimum spanning tree", "after": "modified MST", "start_char_pos": 440, "end_char_pos": 469}, {"type": "R", "before": "defined from absolute", "after": "defined in terms of", "start_char_pos": 503, "end_char_pos": 523}, {"type": "R", "before": "coefficients. We had investigated", "after": "coefficient absolute values, enabling the connections between anticorrelated entities to manifest properly. 
We investigate", "start_char_pos": 542, "end_char_pos": 573}, {"type": "D", "before": "financial", "after": null, "start_char_pos": 583, "end_char_pos": 591}, {"type": "R", "before": "which constituted by 3 types finance assets (29 stock market indicator time series", "after": "comprising three types of financial assets: 28 stock market indicators", "start_char_pos": 605, "end_char_pos": 687}, {"type": "R", "before": "price time series and 19 commodity futures price time series). Empirical analyses show that the MST network of returns is time-dependent in overall structure, while same type financial assets usually keep stable inter-connections. Moreover each asset in same group show similar economic characters. In other words, each group concerned with one kind of traditional financial commodity. In", "after": ", and 20 commodity futures. We show that though the resulting MST network evolves over time, the financial assets of similar type tend to have connections which are stable over time. In", "start_char_pos": 710, "end_char_pos": 1095}, {"type": "R", "before": "find the time-lag between stock market indicator", "after": "find a characteristic time lag between the", "start_char_pos": 1109, "end_char_pos": 1156}, {"type": "R", "before": "and EUA (EU allowances) , WTI (West Texas Intermediate) volatility time series. The peak of", "after": "and those of the EU CO2 emission allowance (EUA) and crude oil futures (WTI). 
This time lag is given by the peak of the", "start_char_pos": 1180, "end_char_pos": 1270}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1301, "end_char_pos": 1301}, {"type": "D", "before": "between", "after": null, "start_char_pos": 1325, "end_char_pos": 1332}, {"type": "R", "before": "and", "after": "with that of the", "start_char_pos": 1346, "end_char_pos": 1349}, {"type": "R", "before": "show a significant time shift", "after": ", and is markedly different", "start_char_pos": 1374, "end_char_pos": 1402}, {"type": "R", "before": "20 days) from 0.", "after": "20 days) from 0, showing that the volatility of stock market indicators today can predict the volatility of EU emissions allowances and of crude oil in the near future.", "start_char_pos": 1406, "end_char_pos": 1421}], "sents_char_pos": [0, 220, 339, 553, 771, 938, 1006, 1092, 1258]} {"doc_id": "1206.4087", "revision_depth": "1", "before_revision": " Homology detection is critical to genomics. Identifying homologous sequence allows us to transfer information gathered in one organism to another quickly and with a high degree of confidence. Non-coding RNA (ncRNA ) presents a challenge for homology detection, as the primary sequence is often poorly conserved and de novo structure prediction remains difficult. This chapter introduces methods developed by the Rfam database for identifying "families" of homologous ncRNAs from single "seed" sequences using manually curated alignments to build powerful statistical models known as covariance models (CMs) . We provide a brief overview of the state of alignment and secondary structure prediction algorithms. This is followed by a step-by-step iterative protocol for identifying homologs, then constructing an alignment and corresponding CM. We also work through an example , building an alignment and CM for the bacterial small RNA MicA, discovering a previously unreported family of divergent MicA homologs in Xenorhabdus in the process . 
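The covariation signal that covariance models exploit — alignment columns that mutate together while preserving base-pairing — can be illustrated with mutual information between column pairs. The toy alignment below is invented for illustration (it is not Rfam data), and column-pair MI is only a proxy for what a full CM scores.

```python
import math
from collections import Counter

def column(align, i):
    return [seq[i] for seq in align]

def mutual_information(align, i, j):
    """Mutual information (in bits) between columns i and j of an
    ungapped RNA alignment. High MI between columns whose residues stay
    Watson-Crick complementary suggests a conserved base pair."""
    n = len(align)
    ci, cj = Counter(column(align, i)), Counter(column(align, j))
    cij = Counter(zip(column(align, i), column(align, j)))
    mi = 0.0
    for (a, b), nab in cij.items():
        p_ab = nab / n
        mi += p_ab * math.log2(p_ab / ((ci[a] / n) * (cj[b] / n)))
    return mi

# toy alignment: columns 0 and 5 co-vary as complementary partners,
# columns 1-4 are invariant
align = [
    "GACUUC",
    "CACUUG",
    "AACUUU",
    "UACUUA",
    "GACUUC",
    "CACUUG",
]
print(round(mutual_information(align, 0, 5), 2))  # 1.92: covarying pair
print(round(mutual_information(align, 1, 2), 2))  # 0.0: invariant columns
```

An invariant column pair carries no covariation signal even if both positions are perfectly conserved, which is why structure-aware models like CMs recover homologs that sequence-only searches miss.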
This chapter will provide readers with the background necessary to begin defining their own ncRNA families suitable for use in comparative, functional, and evolutionary studies of structured RNA elements .", "after_revision": "Emerging high-throughput technologies have led to a deluge of putative non-coding RNA (ncRNA) sequences identified in a wide variety of organisms. Systematic characterization of these transcripts will be a tremendous challenge. Homology detection is critical to making maximal use of functional information gathered about ncRNAs: identifying homologous sequence allows us to transfer information gathered in one organism to another quickly and with a high degree of confidence. ncRNA presents a challenge for homology detection, as the primary sequence is often poorly conserved and de novo secondary structure prediction and search remains difficult. This protocol introduces methods developed by the Rfam database for identifying "families" of homologous ncRNAs starting from single "seed" sequences using manually curated sequence alignments to build powerful statistical models of sequence and structure conservation known as covariance models (CMs) , implemented in the Infernal software package . We provide a step-by-step iterative protocol for identifying ncRNA homologs, then constructing an alignment and corresponding CM. We also work through an example for the bacterial small RNA MicA, discovering a previously unreported family of divergent MicA homologs in genus Xenorhabdus in the process .", "edit_actions": [{"type": "A", "before": null, "after": "Emerging high-throughput technologies have led to a deluge of putative non-coding RNA (ncRNA) sequences identified in a wide variety of organisms. Systematic characterization of these transcripts will be a tremendous challenge.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "genomics. 
Identifying", "after": "making maximal use of functional information gathered about ncRNAs: identifying", "start_char_pos": 35, "end_char_pos": 56}, {"type": "R", "before": "Non-coding RNA (ncRNA )", "after": "ncRNA", "start_char_pos": 189, "end_char_pos": 212}, {"type": "R", "before": "structure prediction", "after": "secondary structure prediction and search", "start_char_pos": 320, "end_char_pos": 340}, {"type": "R", "before": "chapter", "after": "protocol", "start_char_pos": 365, "end_char_pos": 372}, {"type": "A", "before": null, "after": "starting", "start_char_pos": 471, "end_char_pos": 471}, {"type": "A", "before": null, "after": "sequence", "start_char_pos": 524, "end_char_pos": 524}, {"type": "A", "before": null, "after": "of sequence and structure conservation", "start_char_pos": 573, "end_char_pos": 573}, {"type": "A", "before": null, "after": ", implemented in the Infernal software package", "start_char_pos": 607, "end_char_pos": 607}, {"type": "D", "before": "brief overview of the state of alignment and secondary structure prediction algorithms. This is followed by a", "after": null, "start_char_pos": 623, "end_char_pos": 732}, {"type": "A", "before": null, "after": "ncRNA", "start_char_pos": 781, "end_char_pos": 781}, {"type": "D", "before": ", building an alignment and CM", "after": null, "start_char_pos": 877, "end_char_pos": 907}, {"type": "A", "before": null, "after": "genus", "start_char_pos": 1015, "end_char_pos": 1015}, {"type": "D", "before": ". This chapter will provide readers with the background necessary to begin defining their own ncRNA families suitable for use in comparative, functional, and evolutionary studies of structured RNA elements", "after": null, "start_char_pos": 1043, "end_char_pos": 1248}], "sents_char_pos": [0, 44, 188, 359, 609, 710, 844, 1044]} {"doc_id": "1208.1277", "revision_depth": "1", "before_revision": "In this paper the complex systems are discussed from the business and economics point of view . 
It will be showed and motivated that social systems are typically chaotic and/or non-linear and therefore non-equilibrium or complex systems. It is discussed that the rapid change global consumer behaviour is underway, that further increases the complexity in the business and management. For successful management under complexity, a following principles are offered: openness and international competition, tolerance and variety of the ideas, self-reliability and low dependence on external help. The paper discusses the opportunities and challenges in management under complexity from macro and micro economic perspective. It is motivated that small economies have good prospects to gain from the global processes underway, if they can demonstrate flexible production , reliable business ethics and good risk management. In this environment, the challenges for corporate managements are being also permanently changed: the balance between short term noise and long term chaos whose attractor includes customers, shareholders and employees must be found. The paper is concluded with the two financial applications: about debt and risk management. The non-equilibrium economic establishment leads to additional problems by using excessive borrowing; unexpected downturns in economy can more easily kill companies. Finally, the demand for quantitative improvements in risk management is postulated . Development of the financial markets has triggered non-linearity to spike in prices of various production articles such as agricultural and other commodities that has added market risk management to the business model of many companies .", "after_revision": "In this chapter the complex systems are discussed in the context of economic and business policy and decision making . It will be showed and motivated that social systems are typically chaotic , non-linear and/or non-equilibrium and therefore complex systems. 
It is discussed that the rapid change in global consumer behaviour is underway, that further increases the complexity in business and management. For policy making under complexity, following principles are offered: openness and international competition, tolerance and variety of ideas, self-reliability and low dependence on external help. The chapter contains four applications that build on the theoretical motivation of complexity in social systems. The first application demonstrates that small economies have good prospects to gain from the global processes underway, if they can demonstrate production flexibility , reliable business ethics and good risk management. The second application elaborates on and discusses the opportunities and challenges in decision making under complexity from macro and micro economic perspective. In this environment, the challenges for corporate management are being also permanently changed: the balance between short term noise and long term chaos whose attractor includes customers, shareholders and employees must be found. The emergence of chaos in economic relationships is demonstrated by a simple system of differential equations that relate the stakeholders described above. The chapter concludes with two financial applications: about debt and risk management. The non-equilibrium economic establishment leads to additional problems by using excessive borrowing; unexpected downturns in economy can more easily kill companies. 
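The emergence of chaos from simple deterministic interactions, which this chapter demonstrates with a system of differential equations, can be illustrated with a Lorenz-type system under plain Euler integration. This is a generic stand-in with textbook parameters, not the chapter's stakeholder equations; the point is only the sensitive dependence on initial conditions.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # one explicit Euler step of the Lorenz system
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_separation(steps=6000):
    """Integrate two trajectories whose initial states differ only in the
    sixth decimal place and record their largest coordinate-wise gap."""
    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.000001)
    worst = 0.0
    for _ in range(steps):
        a = lorenz_step(a)
        b = lorenz_step(b)
        worst = max(worst, max(abs(u - v) for u, v in zip(a, b)))
    return worst

# a 1e-6 perturbation grows to the size of the attractor itself,
# so long-horizon point forecasts are hopeless even though the model
# is fully deterministic
print(max_separation())
```

This is the practical content of "non-equilibrium" management: short-term behaviour is predictable noise, long-term behaviour is bounded but unforecastable, so policy has to target the shape of the attractor rather than individual trajectories.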
Finally, the demand for quantitative improvements in risk management is postulated .", "edit_actions": [{"type": "R", "before": "paper", "after": "chapter", "start_char_pos": 8, "end_char_pos": 13}, {"type": "R", "before": "from the business and economics point of view", "after": "in the context of economic and business policy and decision making", "start_char_pos": 48, "end_char_pos": 93}, {"type": "A", "before": null, "after": ", non-linear", "start_char_pos": 170, "end_char_pos": 170}, {"type": "R", "before": "non-linear and therefore non-equilibrium or", "after": "non-equilibrium and therefore", "start_char_pos": 178, "end_char_pos": 221}, {"type": "A", "before": null, "after": "in", "start_char_pos": 277, "end_char_pos": 277}, {"type": "D", "before": "the", "after": null, "start_char_pos": 358, "end_char_pos": 361}, {"type": "R", "before": "successful management", "after": "policy making", "start_char_pos": 391, "end_char_pos": 412}, {"type": "D", "before": "a", "after": null, "start_char_pos": 431, "end_char_pos": 432}, {"type": "D", "before": "the", "after": null, "start_char_pos": 532, "end_char_pos": 535}, {"type": "R", "before": "paper discusses the opportunities and challenges in management under complexity from macro and micro economic perspective. It is motivated", "after": "chapter contains four applications that build on the theoretical motivation of complexity in social systems. 
The first application demonstrates", "start_char_pos": 601, "end_char_pos": 739}, {"type": "R", "before": "flexible production", "after": "production flexibility", "start_char_pos": 849, "end_char_pos": 868}, {"type": "A", "before": null, "after": "The second application elaborates on and discusses the opportunities and challenges in decision making under complexity from macro and micro economic perspective.", "start_char_pos": 922, "end_char_pos": 922}, {"type": "R", "before": "managements", "after": "management", "start_char_pos": 973, "end_char_pos": 984}, {"type": "R", "before": "paper is concluded with the", "after": "emergence of chaos in economic relationships is demonstrated by a simple system of differential equations that relate the stakeholders described above. The chapter concludes with", "start_char_pos": 1160, "end_char_pos": 1187}, {"type": "D", "before": ". Development of the financial markets has triggered non-linearity to spike in prices of various production articles such as agricultural and other commodities that has added market risk management to the business model of many companies", "after": null, "start_char_pos": 1497, "end_char_pos": 1734}], "sents_char_pos": [0, 95, 238, 386, 596, 723, 921, 1155, 1247, 1349, 1413, 1498]} {"doc_id": "1208.5520", "revision_depth": "2", "before_revision": "In the present work, a novel second-order approximation for ATM option prices is derived for a large class of exponential L\\'evy models . Our method of proof is based on an integral representation of the option price involving the tail probability of the log-return process under the share measure and a suitable change of probability measure under which the pure-jump component of the log-return process becomes a Y-stable process. Our approach is sufficiently general to cover a wide class of L\\'evy processes which satisfy the latter property and whose L\\'evy densities can be closely approximated by a stable density near the origin. 
The results hereafter shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of ATM option prices near expiration. In the presence of an additional Brownian component, the second-order term, in time-t, is of the form d_{2} t^{(3-Y)/2}, with the coefficient d_{2} depending only on the overall jump intensity of the process and the tail-heaviness parameter Y. This extends the already known result that the leading term is \sigma t^{1/2}/\sqrt{2\pi}, where \sigma is the volatility of the continuous component . In contrast, under such a pure-jump model, the dependence on the overall jump intensity and Y is already reflected in the leading term, which is of the form d_{1}t^{1/Y}. The information on the relative frequency of negative and positive jumps appears only in the second-order term , which is shown to be of the form d_{2} t and whose order of decay turns out to be independent of Y. The asymptotic behavior of the corresponding Black-Scholes implied volatilities is also addressed. Our numerical results show that first-order term typically exhibits rather poor performance and that the second-order term significantly improves the approximation's accuracy.", "after_revision": "In the present work, a novel second-order approximation for ATM option prices is derived for a large class of exponential L\\'evy models with or without Brownian component. The results hereafter shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of ATM option prices near expiration. 
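The leading term quoted in this record — an ATM price behaving like sigma * t^{1/2} / sqrt(2*pi) per unit of spot for small t — is easy to check numerically against the exact Black-Scholes ATM call price (zero rates, K = S). This verifies only the standard first-order Brownian term, not the paper's second-order Lévy correction.

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_atm_call(S, sigma, t):
    """Exact Black-Scholes ATM call (K = S, zero interest and dividends):
    C = S * (N(d1) - N(-d1)) with d1 = sigma * sqrt(t) / 2."""
    d1 = 0.5 * sigma * math.sqrt(t)
    return S * (norm_cdf(d1) - norm_cdf(-d1))

def atm_leading_term(S, sigma, t):
    # first-order small-t approximation: S * sigma * sqrt(t / (2*pi))
    return S * sigma * math.sqrt(t / (2.0 * math.pi))

S, sigma = 100.0, 0.3
for t in (1.0, 0.1, 0.01):
    exact = bs_atm_call(S, sigma, t)
    approx = atm_leading_term(S, sigma, t)
    print(t, exact, approx, abs(exact - approx) / exact)
```

Expanding N(d1) - N(-d1) shows the relative error of the leading term is of order sigma^2 * t / 24, so it shrinks linearly in t as maturity approaches — consistent with the higher-order expansions the abstract discusses.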
In the presence of a Brownian component, the second-order term, in time-t, is of the form d_{2} t^{(3-Y)/2}, with the coefficient d_{2} depending only on Y, the degree of jump activity, on \sigma, the volatility of the continuous component , and on an additional parameter controlling the intensity of the "small" jumps (regardless of their signs) . In contrast, under a pure-jump model, the dependence on Y and on the separate intensities of negative and positive small jumps are already reflected in the leading term, which is of the form d_{1}t^{1/Y}. The second-order term is shown to be of the form d_{2} t and , therefore, its order of decay turns out to be independent of Y. The asymptotic behavior of the corresponding Black-Scholes implied volatilities is also addressed. Our method of proof is based on an integral representation of the option price involving the tail probability of the log-return process under the share measure and a suitable change of probability measure under which the pure-jump component of the log-return process becomes a Y-stable process. Our approach is sufficiently general to cover a wide class of L\\'evy processes which satisfy the latter property and whose L\\'evy density can be closely approximated by a stable density near the origin. Our numerical results show that first-order term typically exhibits rather poor performance and that the second-order term significantly improves the approximation's accuracy.", "edit_actions": [{"type": "D", "before": ". 
Our method of proof is based on an integral representation of the option price involving the tail probability of the log-return process under the share measure and a suitable change of probability measure under which the", "after": null, "start_char_pos": 136, "end_char_pos": 358}, {"type": "D", "before": "pure-jump component of the log-return", "after": null, "start_char_pos": 359, "end_char_pos": 395}, {"type": "R", "before": "process becomes a Y-stable process. Our approach is sufficiently general to cover a wide class of L\\'evy processes which satisfy the latter property and whose L\\'evy densities can be closely approximated by a stable density near the origin.", "after": "with or without Brownian component.", "start_char_pos": 414, "end_char_pos": 654}, {"type": "R", "before": "an additional", "after": "a", "start_char_pos": 864, "end_char_pos": 877}, {"type": "D", "before": "the overall jump intensity of the process and the tail-heaviness parameter Y. This extends the", "after": null, "start_char_pos": 1011, "end_char_pos": 1105}, {"type": "D", "before": "already", "after": null, "start_char_pos": 1106, "end_char_pos": 1113}, {"type": "D", "before": "known result that the leading term is \sigma t^{1/2", "after": null, "start_char_pos": 1132, "end_char_pos": 1183}, {"type": "A", "before": null, "after": "Y, the degree of jump activity, on \sigma,", "start_char_pos": 1223, "end_char_pos": 1223}, {"type": "A", "before": null, "after": ", and on an additional parameter controlling the intensity of the \"small\" jumps (regardless of their signs)", "start_char_pos": 1267, "end_char_pos": 1267}, {"type": "D", "before": "such", "after": null, "start_char_pos": 1289, "end_char_pos": 1293}, {"type": "R", "before": "the overall jump intensity and Y is", "after": "Y and on the separate intensities of negative and positive small jumps are", "start_char_pos": 1331, "end_char_pos": 1366}, {"type": "D", "before": "information on the relative frequency of negative and 
positive jumps appears only in the", "after": null, "start_char_pos": 1445, "end_char_pos": 1533}, {"type": "D", "before": ", which", "after": null, "start_char_pos": 1552, "end_char_pos": 1559}, {"type": "R", "before": "whose", "after": ", therefore, its", "start_char_pos": 1599, "end_char_pos": 1604}, {"type": "A", "before": null, "after": "method of proof is based on an integral representation of the option price involving the tail probability of the log-return process under the share measure and a suitable change of probability measure under which the pure-jump component of the log-return process becomes a Y-stable process. Our approach is sufficiently general to cover a wide class of L\\'evy processes which satisfy the latter property and whose L\\'evy densitiy can be closely approximated by a stable density near the origin. Our", "start_char_pos": 1757, "end_char_pos": 1757}, {"type": "R", "before": "rather", "after": "rather", "start_char_pos": 1822, "end_char_pos": 1828}], "sents_char_pos": [0, 137, 449, 654, 844, 1088, 1269, 1440, 1653, 1752]} {"doc_id": "1209.3924", "revision_depth": "1", "before_revision": "Multimodal oncological strategies which combine chemotherapy or radiotherapy with hyperthermia (i.e. raising the temperature of a region of the body affected by cancer) have a potential of improving the efficacy of the non-surgical methods of cancer treatment. Hyperthermia engages the heat shock response mechanism (HSR), which main component (the heat shock proteins) is known to directly prevent the intended apoptosis of cancer cells. Moreover, cancer cells can have an already partially activated HSR, thereby hyperthermia may be more toxic to them relative to normal cells. However , HSR triggers thermotolerance, i.e. the hyperthermia treated cells show an impairment in their susceptibility to a subsequent heat-induced stress. For that reason, the application of the combined hyperthermia therapy should be carefully examined . 
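The APMC technique this record refers to estimates the probability of a temporal property by running many stochastic (Gillespie SSA) simulations and counting the fraction that satisfy it. The sketch below uses a toy birth-death process and a reachability property; the rates, threshold, and horizon are illustrative choices, not the paper's HSR model.

```python
import random

def ssa_reaches(threshold, t_max, k_prod=1.0, k_deg=0.1, rng=random):
    """One Gillespie SSA trajectory of a birth-death process
    (0 -> X at rate k_prod, X -> 0 at rate k_deg * x); returns True if
    the copy number reaches `threshold` before time t_max."""
    t, x = 0.0, 0
    while t < t_max:
        a1, a2 = k_prod, k_deg * x        # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)          # time to the next reaction
        if t >= t_max:
            break
        if rng.random() * a0 < a1:        # pick a reaction by propensity
            x += 1
        else:
            x -= 1
        if x >= threshold:
            return True
    return False

def apmc_estimate(n_runs=2000, seed=1):
    """Monte Carlo estimate of P(X reaches 15 before t = 20)."""
    rng = random.Random(seed)
    hits = sum(ssa_reaches(threshold=15, t_max=20.0, rng=rng)
               for _ in range(n_runs))
    return hits / n_runs

print(apmc_estimate())
```

Chernoff-style bounds on the number of runs turn this frequency into an approximation of the true probability with chosen error and confidence, which is what lets APMC trade exactness for scalability on models too large for exact CME analysis.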
We adapt our previous HSR model and propose its stochastic extension, which we then analyze using the approximate probabilistic model checking (APMC) technique. We formalize the notion of the thermotolerance and compute the size and the duration of the HSR-induced thermotolerance. We quantify the effect of a combined therapy of hyperthermia and a cytotoxic inhibition of proteins refolding . By mechanistic modeling of HSR we are able to support the common belief that the combination of cancer treatment strategies increases therapy efficacy. Moreover, our results demonstrate feasibility and practical potential of APMC in analysis of stochastic models of signaling pathways.", "after_revision": "Multimodal oncological strategies which combine chemotherapy or radiotherapy with hyperthermia have a potential of improving the efficacy of the non-surgical methods of cancer treatment. Hyperthermia engages the heat-shock response mechanism (HSR), which main component (the heat-shock proteins) is known to directly prevent the intended apoptosis of cancer cells. Moreover, cancer cells can have an already partially activated HSR, thereby hyperthermia may be more toxic to them relative to normal cells. On the other hand , HSR triggers thermotolerance, i.e. the hyperthermia treated cells show an impairment in their susceptibility to a subsequent heat-induced stress. This poses a question about the efficacy and about an optimal strategy of the therapy combined with hyperthermia treatment . We adapt our previous HSR model and propose its stochastic extension, which we then analyse using the approximate probabilistic model checking (APMC) technique. We formalise the notion of the thermotolerance and estimate the intensity and the duration of the HSR-induced thermotolerance. Finally, we quantify the effect of a multimodal therapy based on hyperthermia and a cytotoxic effect of bortezomib, a proteasome inhibitor, and we propose an optimal strategy for combining these two modalities . 
By mechanistic modelling of HSR we are able to support the common belief that the combination of cancer treatment strategies increases therapy efficacy. Moreover, our results demonstrate feasibility and practical potential of APMC in analysis of stochastic models of signalling pathways.", "edit_actions": [{"type": "D", "before": "(i.e. raising the temperature of a region of the body affected by cancer)", "after": null, "start_char_pos": 95, "end_char_pos": 168}, {"type": "R", "before": "heat shock", "after": "heat-shock", "start_char_pos": 286, "end_char_pos": 296}, {"type": "R", "before": "heat shock", "after": "heat-shock", "start_char_pos": 349, "end_char_pos": 359}, {"type": "R", "before": "However", "after": "On the other hand", "start_char_pos": 580, "end_char_pos": 587}, {"type": "R", "before": "For that reason, the application of the combined hyperthermia therapy should be carefully examined", "after": "This poses a question about the efficacy and about an optimal strategy of the therapy combined with hyperthermia treatment", "start_char_pos": 736, "end_char_pos": 834}, {"type": "R", "before": "analyze", "after": "analyse", "start_char_pos": 921, "end_char_pos": 928}, {"type": "R", "before": "formalize", "after": "formalise", "start_char_pos": 1001, "end_char_pos": 1010}, {"type": "R", "before": "compute the size", "after": "estimate the intensity", "start_char_pos": 1049, "end_char_pos": 1065}, {"type": "R", "before": "We", "after": "Finally, we", "start_char_pos": 1119, "end_char_pos": 1121}, {"type": "R", "before": "combined therapy of", "after": "multimodal therapy based on", "start_char_pos": 1147, "end_char_pos": 1166}, {"type": "R", "before": "inhibition of proteins refolding", "after": "effect of bortezomib, a proteasome inhibitor, and we propose an optimal strategy for combining these two modalities", "start_char_pos": 1196, "end_char_pos": 1228}, {"type": "R", "before": "modeling", "after": "modelling", "start_char_pos": 1246, "end_char_pos": 
1254}, {"type": "R", "before": "signaling", "after": "signalling", "start_char_pos": 1497, "end_char_pos": 1506}], "sents_char_pos": [0, 260, 438, 579, 735, 836, 997, 1118, 1230, 1382]} {"doc_id": "1211.4473", "revision_depth": "1", "before_revision": "Microgrids represent an emerging paradigm of future electric power systems that integrate both distributed and centralized generation . Two recent trends in microgrids are the integration of local renewable energy sources (such as wind farms) and the use of co-generation (i.e., to supply both electricity and heat). However, these trends also bring unprecedented challenges to the design of intelligent control strategies for the microgrids. Traditional generation scheduling paradigms assuming perfect prediction of future electricity supply and demand are no longer applicable to microgrids with unpredictable renewable energy supply and co-generation (that depends on both electricity and heat demand). In this paper, we study online algorithms for the micro-grid generation scheduling problem with intermittent renewable energy sources and co-generation, in order to maximize the cost-savings with local generation. Based on insights from the structure of the offline optimal solution, we propose a class of competitive online algorithms, called CHASE (Competitive Heuristic Algorithm for Scheduling Energy-generation), that track the offline optimal in an online fashion. Under certain settings, we show that CHASE achieves the best competitive ratio of all deterministic online algorithms . We also extend our algorithms to intelligently leverage on limited prediction of the future, such as near-term demand or wind forecast. By extensive empirical evaluation using real-world traces, we show that our proposed algorithms can achieve near-offline-optimal performance. 
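The offline benchmark that an online algorithm such as CHASE is measured against can be illustrated with a small dynamic program over the local generator's on/off state: each slot either buys everything from the grid or runs the generator (paying a startup cost when it switches on). The demands, prices, capacity, and startup penalty below are hypothetical, not the paper's model.

```python
def offline_optimal(demand, price, c_gen=1.0, cap=2.0, beta=3.0):
    """Minimum cost of serving `demand` over time when a local generator
    (marginal cost c_gen, capacity cap, startup cost beta) can cover part
    of each slot and the residual is bought at the grid price."""
    INF = float("inf")
    off, on = 0.0, INF   # best cost so far with generator off / on
    for d, p in zip(demand, price):
        stage_off = p * d                                        # grid only
        stage_on = c_gen * min(d, cap) + p * max(d - cap, 0.0)   # run generator
        off, on = (min(off, on) + stage_off,
                   min(on, off + beta) + stage_on)               # beta to switch on
    return min(off, on)

demand = [1.0, 2.0, 2.0, 1.0, 2.0]
price  = [1.2, 4.0, 4.0, 1.1, 4.0]

grid_only = sum(d * p for d, p in zip(demand, price))  # 26.3, never generate
print(grid_only)
print(offline_optimal(demand, price))  # 11.0: pay one startup, run locally
```

The DP sees all future prices at once; an online algorithm must commit on/off decisions without that knowledge, and the competitive ratio cited in the abstract bounds how much worse such causal decisions can be relative to this offline optimum.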
In a representative scenario, CHASE leads to around 20\\% cost savings with no future look-ahead at all, and the cost-savings further increase with limited future look-ahead.", "after_revision": "Microgrids represent an emerging paradigm of future electric power systems that can utilize both distributed and centralized generations . Two recent trends in microgrids are the integration of local renewable energy sources (such as wind farms) and the use of co-generation (i.e., to supply both electricity and heat). However, these trends also bring unprecedented challenges to the design of intelligent control strategies for microgrids. Traditional generation scheduling paradigms rely on perfect prediction of future electricity supply and demand . They are no longer applicable to microgrids with unpredictable renewable energy supply and with co-generation (that needs to consider both electricity and heat demand). In this paper, we study online algorithms for the microgrid generation scheduling problem with intermittent renewable energy sources and co-generation, with the goal of maximizing the cost-savings with local generation. Based on the insights from the structure of the offline optimal solution, we propose a class of competitive online algorithms, called CHASE (Competitive Heuristic Algorithm for Scheduling Energy-generation), that track the offline optimal in an online fashion. 
Under typical settings, we show that CHASE achieves the best competitive ratio among all deterministic online algorithms , and the ratio is no larger than a small constant 3.", "edit_actions": [{"type": "R", "before": "integrate", "after": "can utilize", "start_char_pos": 80, "end_char_pos": 89}, {"type": "R", "before": "generation", "after": "generations", "start_char_pos": 123, "end_char_pos": 133}, {"type": "D", "before": "the", "after": null, "start_char_pos": 427, "end_char_pos": 430}, {"type": "R", "before": "assuming", "after": "rely on", "start_char_pos": 487, "end_char_pos": 495}, {"type": "A", "before": null, "after": ". They", "start_char_pos": 555, "end_char_pos": 555}, {"type": "A", "before": null, "after": "with", "start_char_pos": 642, "end_char_pos": 642}, {"type": "R", "before": "depends on", "after": "needs to consider", "start_char_pos": 663, "end_char_pos": 673}, {"type": "R", "before": "micro-grid", "after": "microgrid", "start_char_pos": 759, "end_char_pos": 769}, {"type": "R", "before": "in order to maximize the", "after": "with the goal of maximizing the", "start_char_pos": 862, "end_char_pos": 886}, {"type": "A", "before": null, "after": "the", "start_char_pos": 932, "end_char_pos": 932}, {"type": "R", "before": "certain", "after": "typical", "start_char_pos": 1187, "end_char_pos": 1194}, {"type": "R", "before": "of", "after": "among", "start_char_pos": 1260, "end_char_pos": 1262}, {"type": "R", "before": ". We also extend our algorithms to intelligently leverage on limited prediction of the future, such as near-term demand or wind forecast. By extensive empirical evaluation using real-world traces, we show that our proposed algorithms can achieve near-offline-optimal performance. 
In a representative scenario, CHASE leads to around 20\\% cost savings with no future look-ahead at all, and the cost-savings further increase with limited future look-ahead.", "after": ", and the ratio is no larger than a small constant 3.", "start_char_pos": 1299, "end_char_pos": 1752}], "sents_char_pos": [0, 135, 316, 442, 708, 922, 1180, 1300, 1436, 1578]} {"doc_id": "1301.3100", "revision_depth": "3", "before_revision": "We consider the optimal problem \\sup:=\\sup} _{\\tau\\inT _{\\eps,T } \\mathbb{E[i=1^n (\\tau-\\eps^i)^+^i], where } B_{(\\tau-\\eps)^+} posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012. Here } T>0 is a fixed time horizon, ( \\phi_t^i )_{0\\leq t\\leq T} is progressively measurable with respect to the Brownian filtration , \\eps ^i \\in[0,T] is a constant, i=1,...o,n, and T_{\\eps,T} is the set of stopping times that lie between a constant \\eps \\in[0,T] and T. We solve this problem by conditioning and then using the theory of reflected backward stochastic differential equations (RBSDEs). As a corollary, we provide the solution to the optimal stopping problem \\sup_{\\tau\\in\\mathcal{T_{0,T}}\\mathbb{E}B_{(\\tau-\\eps)^+} recently posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012. We also provide its asymptotic order } \\eps \\to follows. For large enough }\\eps and for small }\\eps as \\eps\\searrow 0. ", "after_revision": "We consider the optimal stopping problem v^{(\\eps):=\\sup} _{\\tau\\inT _{0,T } [i=1^n (\\tau-\\eps^i)^+^i], where } \\mathbb{EB_{(\\tau-\\eps)^+} posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012. 
Here } T>0 is a fixed time horizon, ( B_t )_{0\\leq t\\leq T} is the Brownian motion , \\eps \\in[0,T] is a constant, and T_{\\eps,T} is the set of stopping times taking values in \\eps ,T . The solution of this problem is characterized by a path dependent reflected backward stochastic differential equations _{0,T}}\\mathbb{E}B_{(\\tau-\\eps)^+} recently posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012. We also provide its asymptotic order } , from which the continuity of\\eps \\to v^{(\\eps) follows. For large enough }\\eps, we obtain an explicit expression for v^{(\\eps) and for small }\\eps we have lower and upper bounds. The main result of the paper is the asymptotics of v^{(\\eps) as \\eps\\searrow 0. As a byproduct, we also obtain L\\'{e", "edit_actions": [{"type": "R", "before": "problem \\sup", "after": "stopping problem v^{(\\eps)", "start_char_pos": 24, "end_char_pos": 36}, {"type": "R", "before": "_{\\eps,T", "after": "_{0,T", "start_char_pos": 55, "end_char_pos": 63}, {"type": "D", "before": "\\mathbb{E", "after": null, "start_char_pos": 66, "end_char_pos": 75}, {"type": "A", "before": null, "after": "\\mathbb{E", "start_char_pos": 110, "end_char_pos": 110}, {"type": "R", "before": "\\phi_t^i", "after": "B_t", "start_char_pos": 321, "end_char_pos": 329}, {"type": "R", "before": "progressively measurable with respect to the Brownian filtration", "after": "the Brownian motion", "start_char_pos": 351, "end_char_pos": 415}, {"type": "D", "before": "^i", "after": null, "start_char_pos": 423, "end_char_pos": 425}, {"type": "D", "before": "i=1,...o,n,", "after": null, "start_char_pos": 450, "end_char_pos": 461}, {"type": "R", "before": "that lie between a constant", "after": "taking values in", "start_char_pos": 506, "end_char_pos": 533}, {"type": "R", "before": "\\in[0,T] and T. 
We solve this problem by conditioning and then using the theory of", "after": ",T", "start_char_pos": 539, "end_char_pos": 621}, {"type": "A", "before": null, "after": ". The solution of this problem is characterized by a path dependent", "start_char_pos": 622, "end_char_pos": 622}, {"type": "D", "before": "(RBSDEs). As a corollary, we provide the solution to the optimal stopping problem \\sup_{\\tau\\in\\mathcal{T", "after": null, "start_char_pos": 676, "end_char_pos": 781}, {"type": "A", "before": null, "after": ", from which the continuity of", "start_char_pos": 1019, "end_char_pos": 1019}, {"type": "A", "before": null, "after": "v^{(\\eps)", "start_char_pos": 1028, "end_char_pos": 1028}, {"type": "A", "before": null, "after": ", we obtain an explicit expression for v^{(\\eps)", "start_char_pos": 1060, "end_char_pos": 1060}, {"type": "A", "before": null, "after": "we have lower and upper bounds. The main result of the paper is the asymptotics of v^{(\\eps)", "start_char_pos": 1081, "end_char_pos": 1081}, {"type": "A", "before": null, "after": "As a byproduct, we also obtain L\\'{e", "start_char_pos": 1101, "end_char_pos": 1101}], "sents_char_pos": [0, 282, 554, 685, 979, 1037]} {"doc_id": "1302.7075", "revision_depth": "2", "before_revision": "Existing computational methods to predict protein--protein interaction affinity often perform poorly in important test cases. In particular, the effects of multiple mutations, non-alanine substitutions, and flexible loops are difficult to predict with available tools and protocols. We present here a new method to interrogate affinity differences resulting from mutations in a host-virus protein--protein interface. 
Our method is based on extensive non-equilibrium all-atom simulations: We computationally pull the machupo virus (MACV) spike glycoprotein (GP1) away from the human transferrin receptor (hTfR1) and estimate affinity using the maximum applied force during a pulling simulation and the area under the force-versus-distance curve. We find that these quantities provide novel biophysical insight into the GP1/hTfR1 interaction. First, with no prior knowledge of the system we can differentiate among wild-type and mutant complexes. Second, although the static co-crystal structure shows two large hydrogen-bonding networks in the GP1/hTfR1 interface, our simulations indicate that only one of them is critical for the bindinginteraction . Third, one viral site known to be critical for infection may mark an important evolutionary suppressor site for infection-resistant hTfR1 mutants. Finally, our method provides an elegant framework to compare the effects of multiple mutations, individually and jointly, on protein--protein interactions.", "after_revision": "In many biological applications, we would like to be able to computationally predict mutational effects on affinity in protein-protein interactions. However, many commonly used methods to predict these effects perform poorly in important test cases. In particular, the effects of multiple mutations, non-alanine substitutions, and flexible loops are difficult to predict with available tools and protocols. We present here an existing method applied in a novel way to a new test case; we interrogate affinity differences resulting from mutations in a host-virus protein-protein interface. We use steered molecular dynamics (SMD) to computationally pull the machupo virus (MACV) spike glycoprotein (GP1) away from the human transferrin receptor (hTfR1) . We then approximate affinity using the maximum applied force of separation and the area under the force-versus-distance curve. 
We find , even without the rigor and planning required for free energy calculations, that these quantities can provide novel biophysical insight into the GP1/hTfR1 interaction. First, with no prior knowledge of the system we can differentiate among wild type and mutant complexes. Moreover, we show that this simple SMD scheme correlates well with relative free energy differences computed via free energy perturbation. Second, although the static co-crystal structure shows two large hydrogen-bonding networks in the GP1/hTfR1 interface, our simulations indicate that one of them may not be important for tight binding . Third, one viral site known to be critical for infection may mark an important evolutionary suppressor site for infection-resistant hTfR1 mutants. Finally, our approach provides a framework to compare the effects of multiple mutations, individually and jointly, on protein-protein interactions.", "edit_actions": [{"type": "R", "before": "Existing computational methods to predict protein--protein interaction affinity often", "after": "In many biological applications, we would like to be able to computationally predict mutational effects on affinity in protein-protein interactions. However, many commonly used methods to predict these effects", "start_char_pos": 0, "end_char_pos": 85}, {"type": "R", "before": "a new method to", "after": "an existing method applied in a novel way to a new test case; we", "start_char_pos": 299, "end_char_pos": 314}, {"type": "R", "before": "protein--protein interface. Our method is based on extensive non-equilibrium all-atom simulations: We", "after": "protein-protein interface. We use steered molecular dynamics (SMD) to", "start_char_pos": 389, "end_char_pos": 490}, {"type": "R", "before": "and estimate", "after": ". 
We then approximate", "start_char_pos": 611, "end_char_pos": 623}, {"type": "R", "before": "during a pulling simulation", "after": "of separation", "start_char_pos": 665, "end_char_pos": 692}, {"type": "A", "before": null, "after": ", even without the rigor and planning required for free energy calculations,", "start_char_pos": 753, "end_char_pos": 753}, {"type": "A", "before": null, "after": "can", "start_char_pos": 776, "end_char_pos": 776}, {"type": "R", "before": "wild-type", "after": "wild type", "start_char_pos": 915, "end_char_pos": 924}, {"type": "A", "before": null, "after": "Moreover, we show that this simple SMD scheme correlates well with relative free energy differences computed via free energy perturbation.", "start_char_pos": 947, "end_char_pos": 947}, {"type": "D", "before": "only", "after": null, "start_char_pos": 1097, "end_char_pos": 1101}, {"type": "R", "before": "is critical for the bindinginteraction", "after": "may not be important for tight binding", "start_char_pos": 1114, "end_char_pos": 1152}, {"type": "R", "before": "method provides an elegant", "after": "approach provides a", "start_char_pos": 1315, "end_char_pos": 1341}, {"type": "R", "before": "protein--protein", "after": "protein-protein", "start_char_pos": 1427, "end_char_pos": 1443}], "sents_char_pos": [0, 125, 282, 416, 744, 842, 946, 1154, 1301]} {"doc_id": "1304.7664", "revision_depth": "2", "before_revision": "Algorithms with low computational intensity show interesting performance and power consumption behavior on multicore processors. We choose the lattice-Boltzmann method (LBM) as a prototype for this scenario in order to show if and how single-chip performance and power characteristics can be generalized to the highly parallel case. LBM is an algorithm for CFD simulations that has gained popularity due to its ease of implementation and suitability for complex geometries. 
In this paper we perform a thorough analysis of a sparse-lattice LBM implementation on the Intel Sandy Bridge processor. Starting from a single-core performance model we can describe the intra-chip saturation characteristics of the code and its optimal operating point in terms of energy to solution as a function of the propagation method, the clock frequency, and the SIMD vectorization. We then show how these findings may be extrapolated to the massively parallel level on a petascale-class machine, and quantify the energy-saving potential of various optimizations. We find that high single-core performance and a correct choice of the number of cores used on the chip are the essential factors for lowest energy to solution with minimal loss of performance . In the highly parallel case, these guidelines are found to be even more important for fixing the optimal performance-energy operating point , especially when taking the system's baseline power consumption and the MPI communication characteristics into account. Simplistic measures often applied by users and computing centers, such as setting a low clock speed for memory-bound applications, have limited impact .", "after_revision": "Memory-bound algorithms show complex performance and energy consumption behavior on multicore processors. We choose the lattice-Boltzmann method (LBM) on an Intel Sandy Bridge cluster as a prototype scenario to investigate if and how single-chip performance and power characteristics can be generalized to the highly parallel case. First we perform an analysis of a sparse-lattice LBM implementation for complex geometries. Using a single-core performance model , we predict the intra-chip saturation characteristics and the optimal operating point in terms of energy to solution as a function of implementation details, clock frequency, vectorization, and number of active cores per chip. 
We show that high single-core performance and a correct choice of the number of active cores per chip are the essential optimizations for lowest energy to solution at minimal performance degradation. Then we extrapolate to the MPI-parallel level and quantify the energy-saving potential of various optimizations and execution modes, where we find these guidelines to be even more important , especially when communication overhead is non-negligible. In our setup we could achieve energy savings of 35\\% in this case, compared to a naive approach. We also demonstrate that a simple non-reflective reduction of the clock speed leaves most of the energy saving potential unused .", "edit_actions": [{"type": "R", "before": "Algorithms with low computational intensity show interesting performance and power", "after": "Memory-bound algorithms show complex performance and energy", "start_char_pos": 0, "end_char_pos": 82}, {"type": "A", "before": null, "after": "on an Intel Sandy Bridge cluster", "start_char_pos": 174, "end_char_pos": 174}, {"type": "R", "before": "for this scenario in order to show", "after": "scenario to investigate", "start_char_pos": 190, "end_char_pos": 224}, {"type": "R", "before": "LBM is an algorithm for CFD simulations that has gained popularity due to its ease of implementation and suitability for complex geometries. In this paper we perform a thorough", "after": "First we perform an", "start_char_pos": 334, "end_char_pos": 510}, {"type": "R", "before": "on the Intel Sandy Bridge processor. Starting from", "after": "for complex geometries. 
Using", "start_char_pos": 559, "end_char_pos": 609}, {"type": "R", "before": "we can describe", "after": ", we predict", "start_char_pos": 642, "end_char_pos": 657}, {"type": "R", "before": "of the code and its", "after": "and the", "start_char_pos": 700, "end_char_pos": 719}, {"type": "R", "before": "the propagation method, the", "after": "implementation details,", "start_char_pos": 792, "end_char_pos": 819}, {"type": "R", "before": "and the SIMD vectorization. We then show how these findings may be extrapolated to the massively parallel level on a petascale-class machine, and quantify the energy-saving potential of various optimizations. We find", "after": "vectorization, and number of active cores per chip. We show", "start_char_pos": 837, "end_char_pos": 1053}, {"type": "R", "before": "cores used on the", "after": "active cores per", "start_char_pos": 1126, "end_char_pos": 1143}, {"type": "R", "before": "factors", "after": "optimizations", "start_char_pos": 1167, "end_char_pos": 1174}, {"type": "R", "before": "with minimal loss of performance . In the highly parallel case, these guidelines are found", "after": "at minimal performance degradation. Then we extrapolate to the MPI-parallel level and quantify the energy-saving potential of various optimizations and execution modes, where we find these guidelines", "start_char_pos": 1205, "end_char_pos": 1295}, {"type": "D", "before": "for fixing the optimal performance-energy operating point", "after": null, "start_char_pos": 1322, "end_char_pos": 1379}, {"type": "R", "before": "taking the system's baseline power consumption and the MPI communication characteristics into account. Simplistic measures often applied by users and computing centers, such as setting a low clock speed for memory-bound applications, have limited impact", "after": "communication overhead is non-negligible. In our setup we could achieve energy savings of 35\\% in this case, compared to a naive approach. 
We also demonstrate that a simple non-reflective reduction of the clock speed leaves most of the energy saving potential unused", "start_char_pos": 1398, "end_char_pos": 1651}], "sents_char_pos": [0, 128, 333, 474, 595, 864, 1045, 1239, 1500]} {"doc_id": "1307.0444", "revision_depth": "1", "before_revision": "An on-going debate in the energy economics and power market community has raised the question if energy-only power markets are increasingly failing due to growing in-feed shares from subsidized renewable energy sources (RES) . The short answer to this is: No , they are not failing ! Energy-based power markets are, however, facing several market distortions, namely from the gap between the electricity volume traded at spot markets versus the overall electricity consumption as well as the (wrong) regulatory assumption that variable RES generation, i.e., wind and PV, have zero marginal operation costs. This paper shows that both effects overamplify the well-known merit-order effect of RES power in-feed beyond a level that can still be explained by the underlying physical realities . In this paper we analyze the current situation of wind and photovoltaic (PV ) power in-feed in the German electric power system and their effect on the spot market \\approx20 . We show a comparison of the FIT-subsidized renewable energy sources (RES ) energy production volume to the spot market volume and the overall load demand. Furthermore, a spot market analysis based on the assumption that renewable energy sources (RES ) units have to feed-in with their assumed true marginal costs, i.e., operation and maintenance costs, is performed. 
Our combined analysis results show that, if the necessary regulatory adaptations are taken, i.e., significantly increasing the spot market's share of overall load demand and using true marginal costs of RES units in the merit-order, energy-based power markets can remain functional despite very high RES power in-feed.", "after_revision": "An on-going debate in the energy economics and power market community has raised the question if energy-only power markets are increasingly failing due to growing in-feed shares from subsidized RES . The short answer to this is: no , they are not failing . Energy-based power markets are, however, facing several market distortions, namely from the gap between the electricity volume traded at spot markets versus the overall electricity consumption as well as the (wrong) regulatory assumption that variable RES generation, i.e., wind and PV, truly have zero marginal operation costs. We show that both effects overamplify the well-known merit-order effect of RES power in-feed beyond a level that is explainable by underlying physical realities , i.e., thermal power plants being willing to accept negative electricity prices to be able to stay online due to considerations of wear tear and start-stop constraints. In this paper we analyze the impacts of wind and PV power in-feed on the spot market for a region that is already today experiencing significant FIT-subsidized RES power in-feed (\\approx20\\%), the German-Austrian market zone of the EPEX . We show a comparison of the FIT-subsidized RES energy production volume to the spot market volume and the overall load demand. Furthermore, a spot market analysis based on the assumption that RES units have to feed-in with their assumed true marginal costs, i.e., operation , maintenance and balancing costs, is performed. 
Our analysis results show that, if the necessary regulatory adaptations are taken, i.e., increasing the spot market's share of overall load demand and using the true marginal costs of RES units in the merit-order, energy-based power markets can remain functional despite high RES power in-feed.", "edit_actions": [{"type": "R", "before": "renewable energy sources (RES)", "after": "RES", "start_char_pos": 194, "end_char_pos": 224}, {"type": "R", "before": "No", "after": "no", "start_char_pos": 256, "end_char_pos": 258}, {"type": "R", "before": "!", "after": ".", "start_char_pos": 282, "end_char_pos": 283}, {"type": "A", "before": null, "after": "truly", "start_char_pos": 571, "end_char_pos": 571}, {"type": "R", "before": "This paper shows", "after": "We show", "start_char_pos": 608, "end_char_pos": 624}, {"type": "R", "before": "can still be explained by the", "after": "is explainable by", "start_char_pos": 730, "end_char_pos": 759}, {"type": "R", "before": ".", "after": ", i.e., thermal power plants being willing to accept negative electricity prices to be able to stay online due to considerations of wear", "start_char_pos": 790, "end_char_pos": 791}, {"type": "A", "before": null, "after": "tear and start-stop constraints.", "start_char_pos": 792, "end_char_pos": 792}, {"type": "R", "before": "current situation", "after": "impacts", "start_char_pos": 822, "end_char_pos": 839}, {"type": "R", "before": "photovoltaic (PV )", "after": "PV", "start_char_pos": 852, "end_char_pos": 870}, {"type": "D", "before": "in the German electric power system and their effect", "after": null, "start_char_pos": 885, "end_char_pos": 937}, {"type": "A", "before": null, "after": "for a region that is already today experiencing significant FIT-subsidized RES power in-feed (", "start_char_pos": 957, "end_char_pos": 957}, {"type": "A", "before": null, "after": "\\%), the German-Austrian market zone of the EPEX", "start_char_pos": 966, "end_char_pos": 966}, {"type": "R", "before": "renewable 
energy sources (RES )", "after": "RES", "start_char_pos": 1012, "end_char_pos": 1043}, {"type": "R", "before": "renewable energy sources (RES )", "after": "RES", "start_char_pos": 1189, "end_char_pos": 1220}, {"type": "R", "before": "and maintenance", "after": ", maintenance and balancing", "start_char_pos": 1299, "end_char_pos": 1314}, {"type": "D", "before": "combined", "after": null, "start_char_pos": 1340, "end_char_pos": 1348}, {"type": "D", "before": "significantly", "after": null, "start_char_pos": 1434, "end_char_pos": 1447}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1516, "end_char_pos": 1516}, {"type": "D", "before": "very", "after": null, "start_char_pos": 1627, "end_char_pos": 1631}], "sents_char_pos": [0, 255, 607, 791, 968, 1123, 1335]} {"doc_id": "1307.5319", "revision_depth": "2", "before_revision": "The aim of this work is to explore the possible types of phenomena that simple macroeconomic Agent-Based models (ABM) can reproduce. Our motivation is to understand the large macro-economic fluctuations observed in the \"Mark I\" ABM. Our central finding is the generic existence of a phase transition between a \"good economy\" where unemployment is low, and a \"bad economy\" where unemployment is high. We show that this transition is induced by an asymmetry between the rate of hiring and the rate of firing of the firms. This asymmetry is, in Mark I, due to the fact that as the interest rate increases, firms become more and more reluctant to take further loans and have to reduce their workforce. The unemployment level remains small until a tipping point, beyond which the economy suddenly collapses. If the parameters are such that the system is close to this transition, any small fluctuation is amplified as the system jumps between the two equilibria. We have explored several natural extensions of the model. One is to introduce a bankruptcy threshold, limiting the leverage of firms. 
This leads to a rich phase diagram with, in particular, a region where acute endogenous crises occur, during which the unemployment rate shoots up before the economy can recover. We allow the hiring/firing propensity to depend on the financial fragility of firms, and introduce simple wage update policies. This leads to inflation (in the \"good\" phase) or deflation (in the \"bad\" phase), but leaves the overall phase diagram of the model essentially unchanged. We have finally explored the effect of simple monetary policies that attempt to contain rising unemployment and defang crises. We end the paper with general comments on the usefulness of ABMs to model macroeconomic phenomena, in particular in view of the time needed to reach a steady state that raises the issue of ergodicity in these models.", "after_revision": "The aim of this work is to explore the possible types of phenomena that simple macroeconomic Agent-Based models (ABM) can reproduce. We propose a methodology, inspired by statistical physics, that characterizes a model through its 'phase diagram' in the space of parameters. Our first motivation is to understand the large macro-economic fluctuations observed in the 'Mark I' ABM. Our major finding is the generic existence of a phase transition between a 'good economy' where unemployment is low, and a 'bad economy' where unemployment is high. We introduce a simpler framework that allows us to show that this transition is robust against many modifications of the model, and is generically induced by an asymmetry between the rate of hiring and the rate of firing of the firms. The unemployment level remains small until a tipping point, beyond which the economy suddenly collapses. If the parameters are such that the system is close to this transition, any small fluctuation is amplified as the system jumps between the two equilibria. We have explored several natural extensions of the model. 
One is to introduce a bankruptcy threshold, limiting the leverage of firms. This leads to a rich phase diagram with, in particular, a region where acute endogenous crises occur, during which the unemployment rate shoots up before the economy can recover. We also introduce simple wage policies. This leads to inflation (in the 'good' phase) or deflation (in the 'bad' phase), but leaves the overall phase diagram of the model essentially unchanged. We have also started exploring the effect of simple monetary policies that attempt to contain rising unemployment and defang crises. We end the paper with general comments on the usefulness of ABMs to model macroeconomic phenomena, in particular in view of the time needed to reach a steady state that raises the issue of ergodicity in these models.", "edit_actions": [{"type": "R", "before": "Our", "after": "We propose a methodology, inspired by statistical physics, that characterizes a model through its 'phase diagram' in the space of parameters. Our first", "start_char_pos": 133, "end_char_pos": 136}, {"type": "R", "before": "\"Mark I\"", "after": "'Mark I'", "start_char_pos": 219, "end_char_pos": 227}, {"type": "R", "before": "central", "after": "major", "start_char_pos": 237, "end_char_pos": 244}, {"type": "R", "before": "\"good economy\"", "after": "'good economy'", "start_char_pos": 310, "end_char_pos": 324}, {"type": "R", "before": "\"bad economy\"", "after": "'bad economy'", "start_char_pos": 358, "end_char_pos": 371}, {"type": "A", "before": null, "after": "introduce a simpler framework that allows us to", "start_char_pos": 403, "end_char_pos": 403}, {"type": "A", "before": null, "after": "robust against many modifications of the model, and is generically", "start_char_pos": 433, "end_char_pos": 433}, {"type": "D", "before": "This asymmetry is, in Mark I, due to the fact that as the interest rate increases, firms become more and more reluctant to take further loans and have to reduce their workforce.", "after": null, 
"start_char_pos": 522, "end_char_pos": 699}, {"type": "R", "before": "allow the hiring/firing propensity to depend on the financial fragility of firms, and", "after": "also", "start_char_pos": 1276, "end_char_pos": 1361}, {"type": "D", "before": "update", "after": null, "start_char_pos": 1384, "end_char_pos": 1390}, {"type": "R", "before": "\"good\"", "after": "'good'", "start_char_pos": 1433, "end_char_pos": 1439}, {"type": "R", "before": "\"bad\"", "after": "'bad'", "start_char_pos": 1468, "end_char_pos": 1473}, {"type": "R", "before": "finally explored", "after": "also started exploring", "start_char_pos": 1563, "end_char_pos": 1579}], "sents_char_pos": [0, 132, 232, 399, 521, 699, 804, 959, 1017, 1093, 1272, 1400, 1554, 1681]} {"doc_id": "1309.5209", "revision_depth": "2", "before_revision": "We unravel how functional plasticity and redundancy are essential mechanisms underlying the ability to survive of metabolic networks. For that, we perform an exhaustive computational screening of synthetic lethal reaction pairs in Escherichia coli in minimal medium and find that synthetic lethals divide in two different groups depending on whether the synthetic lethal interaction works as a back up or as a parallel use mechanism, the first corresponding to essential plasticity and the second to essential redundancy. In E. coli, the analysis of how pathways are entangled through essential redundancy supports the view that synthetic lethality affects preferentially a single function or pathway. In contrast, essential plasticity, the dominant class, tends to be inter-pathway but concentrated and unveils Cell Envelope Biosynthesis as an essential backup to Membrane Lipid Metabolism. When comparing E. 
coli and Mycoplasma pneumoniae, we find that the metabolic networks of the URLanisms exhibit opposite relationships between the relative importance of plasticity and redundancy , consistent with the conjecture that plasticity is a more sophisticated mechanism that requires a more URLanization. Finally, coessential reaction pairs are explored in different environmental conditions to uncover the interplay between the two mechanisms. We find that synthetic lethal interactions and their classification in plasticity and redundancy are basically insensitive to minimal medium composition, and are highly conserved even when the environment is enriched with nonessential compounds or overconstrained to decrease maximum biomass formation.", "after_revision": "We unravel how functional plasticity and redundancy are essential mechanisms underlying the ability to survive of metabolic networks. We perform an exhaustive computational screening of synthetic lethal reaction pairs in Escherichia coli in a minimal medium and we find that synthetic lethal pairs divide in two different groups depending on whether the synthetic lethal interaction works as a backup or as a parallel use mechanism, the first corresponding to essential plasticity and the second to essential redundancy. In E. coli, the analysis of pathways entanglement through essential redundancy supports the view that synthetic lethality affects preferentially a single function or pathway. In contrast, essential plasticity, the dominant class, tends to be inter-pathway but strongly localized and unveils Cell Envelope Biosynthesis as an essential backup for Membrane Lipid Metabolism. When comparing E. coli and Mycoplasma pneumoniae, we find that the metabolic networks of the URLanisms exhibit a large difference in the relative importance of plasticity and redundancy which is consistent with the conjecture that plasticity is a sophisticated mechanism that requires a URLanization. 
Finally, coessential reaction pairs are explored in different environmental conditions to uncover the interplay between the two mechanisms. We find that synthetic lethal interactions and their classification in plasticity and redundancy are basically insensitive to medium composition, and are highly conserved even when the environment is enriched with nonessential compounds or overconstrained to decrease maximum biomass formation.", "edit_actions": [{"type": "R", "before": "For that, we", "after": "We", "start_char_pos": 134, "end_char_pos": 146}, {"type": "A", "before": null, "after": "a", "start_char_pos": 251, "end_char_pos": 251}, {"type": "A", "before": null, "after": "we", "start_char_pos": 271, "end_char_pos": 271}, {"type": "R", "before": "lethals", "after": "lethal pairs", "start_char_pos": 292, "end_char_pos": 299}, {"type": "R", "before": "back up", "after": "backup", "start_char_pos": 396, "end_char_pos": 403}, {"type": "R", "before": "how pathways are entangled", "after": "pathways entanglement", "start_char_pos": 552, "end_char_pos": 578}, {"type": "R", "before": "concentrated", "after": "strongly localized", "start_char_pos": 789, "end_char_pos": 801}, {"type": "R", "before": "to", "after": "for", "start_char_pos": 864, "end_char_pos": 866}, {"type": "R", "before": "opposite relationships between", "after": "a large difference in", "start_char_pos": 1005, "end_char_pos": 1035}, {"type": "R", "before": ",", "after": "which is", "start_char_pos": 1089, "end_char_pos": 1090}, {"type": "D", "before": "more", "after": null, "start_char_pos": 1143, "end_char_pos": 1147}, {"type": "D", "before": "more", "after": null, "start_char_pos": 1188, "end_char_pos": 1192}, {"type": "D", "before": "minimal", "after": null, "start_char_pos": 1473, "end_char_pos": 1480}], "sents_char_pos": [0, 133, 523, 703, 893, 1206, 1346]} {"doc_id": "1309.7474", "revision_depth": "1", "before_revision": "The heat URLanizing protein (Hop) is important in modulating the activity and 
co-interaction of two chaperones: heat shock protein 90 and 70 ( Hsp90 and Hsp70 ). Recent research suggests that Plasmodium falciparum Hop (PfHop), PfHsp90 and PfHsp70 form a complex in the trophozoite infective stage. However, there has been little computational research on the malarial Hop protein in complex with other malarial Hsps. Using in silico characterization of PfHop , this work showed that individual domains of Hop are evolving at different rates within the protein. Homology modeling of PfHop , and of human Hop (HsHop ) in complex with its own cytosolic Hsp90 and Hsp70 C-terminal peptide partners , indicated high conservation of the Hop concave TPR sites bound to the C-terminal motifs of partner proteins. Further, for the first time, we identified two additional binding sites between Hop and Hsp90 which are distinctly less conserved between human and malaria parasite. These two sites are located on the convex surface of Hop TPR2 and involved in interactions with Hsp90 middle and C-terminal domains. Motif analysis was combined with phylogenetic trees and structural mapping to investigate the evolutionary conservation of important structural and functional sites of Hop. Statistical coupling analysis identified several sectors of coevolving residues in Hop. The convex sites of interaction between Hop TPR2 and Hsp90 middle and C-terminal domains were found to be part of separately coevolving sectors. This provides further evidence that these convex interaction sites are important to complex formation with Hsp90, and are thus potential sites for inhibitor design (in addition to the concave sites which have been the focus of previous efforts) . 
Further the convex sites are less conserved than the concave sites, which make their potential for malarial inhibitor design extremely attractive .", "after_revision": "The heat URLanizing protein (Hop) is important in modulating the activity and co-interaction of two chaperones: heat shock protein 70 and 90 ( Hsp70 and Hsp90 ). Recent research suggested that Plasmodium falciparum Hop (PfHop), PfHsp70 and PfHsp90 form a complex in the trophozoite infective stage. However, there has been little computational research on the malarial Hop protein in complex with other malarial Hsps. Using in silico characterization of the protein , this work showed that individual domains of Hop are evolving at different rates within the protein. Differences between human Hop (HsHop) and PfHop were identified by motif analysis. Homology modeling of PfHop and HsHop in complex with their own cytosolic Hsp90 and Hsp70 C-terminal peptide partners indicated excellent conservation of the Hop concave TPR sites bound to the C-terminal motifs of partner proteins. Further, we analyzed additional binding sites between Hop and Hsp90 , and showed, for the first time, that they are distinctly less conserved between human and malaria parasite. These sites are located on the convex surface of Hop TPR2 , and involved in interactions with the Hsp90 middle domain. 
Since the convex sites are less conserved than the concave sites, it makes their potential for malarial inhibitor design extremely attractive (as opposed to the concave sites which have been the focus of previous efforts) .", "edit_actions": [{"type": "D", "before": "90 and", "after": null, "start_char_pos": 131, "end_char_pos": 137}, {"type": "A", "before": null, "after": "and 90", "start_char_pos": 141, "end_char_pos": 141}, {"type": "D", "before": "Hsp90 and", "after": null, "start_char_pos": 144, "end_char_pos": 153}, {"type": "A", "before": null, "after": "and Hsp90", "start_char_pos": 160, "end_char_pos": 160}, {"type": "R", "before": "suggests", "after": "suggested", "start_char_pos": 180, "end_char_pos": 188}, {"type": "D", "before": "PfHsp90 and", "after": null, "start_char_pos": 229, "end_char_pos": 240}, {"type": "A", "before": null, "after": "and PfHsp90", "start_char_pos": 249, "end_char_pos": 249}, {"type": "R", "before": "PfHop", "after": "the protein", "start_char_pos": 456, "end_char_pos": 461}, {"type": "A", "before": null, "after": "Differences between human Hop (HsHop) and PfHop were identified by motif analysis.", "start_char_pos": 564, "end_char_pos": 564}, {"type": "R", "before": ", and of human Hop (HsHop )", "after": "and HsHop", "start_char_pos": 592, "end_char_pos": 619}, {"type": "R", "before": "its", "after": "their", "start_char_pos": 636, "end_char_pos": 639}, {"type": "R", "before": ", indicated high", "after": "indicated excellent", "start_char_pos": 698, "end_char_pos": 714}, {"type": "R", "before": "for the first time, we identified two", "after": "we analyzed", "start_char_pos": 818, "end_char_pos": 855}, {"type": "R", "before": "which", "after": ", and showed, for the first time, that they", "start_char_pos": 903, "end_char_pos": 908}, {"type": "D", "before": "two", "after": null, "start_char_pos": 981, "end_char_pos": 984}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1037, "end_char_pos": 1037}, {"type": "A", 
"before": null, "after": "the", "start_char_pos": 1072, "end_char_pos": 1072}, {"type": "R", "before": "and C-terminal domains. Motif analysis was combined with phylogenetic trees and structural mapping to investigate the evolutionary conservation of important structural and functional sites of Hop. Statistical coupling analysis identified several sectors of coevolving residues in Hop. The convex sites of interaction between Hop TPR2 and Hsp90 middle and C-terminal domains were found to be part of separately coevolving sectors. This provides further evidence that these convex interaction sites are important to complex formation with Hsp90, and are thus potential sites for inhibitor design (in addition", "after": "domain. Since the convex sites are less conserved than the concave sites, it makes their potential for malarial inhibitor design extremely attractive (as opposed", "start_char_pos": 1086, "end_char_pos": 1692}, {"type": "D", "before": ". Further the convex sites are less conserved than the concave sites, which make their potential for malarial inhibitor design extremely attractive", "after": null, "start_char_pos": 1761, "end_char_pos": 1908}], "sents_char_pos": [0, 163, 300, 419, 563, 808, 974, 1109, 1282, 1370, 1515, 1762]} {"doc_id": "1310.7717", "revision_depth": "1", "before_revision": "Self-sustainability for energy scavenging networks is a crucial step in modern sensor network developments. Most existing analyses are however tailored to a single transmitting node or are difficult to map into practical protocol designs. Here, we offer a comprehensive framework for self-sufficient sensor networks powered by renewable energy sources. 
To this extent, we decompose the problem in two nested optimization steps: in the inner one we characterize the optimal operating point of the network subject to a required energy consumption figure, while in the outer step , we provide optimal energy management policies to make the system self-sufficient , given the the statistical description of the energy source . Our framework permits to gauge the impact of key sensor network parameters, such as the battery size , the harvester size (e.g., solar panel), the information transmission rate and the nodes' duty cycle. In addition, the closed form solution of the inner optimization problem lends itself to the implementation, as an online algorithm, on computationally constrained embedded devices. Finally, we provide results describing the consequences of most of the relevant design choices in terms of network behavior, maximum achievable throughput and dynamics associated with the optimal energy management policies .", "after_revision": "Self-sustainability is a crucial step for modern sensor networks. Here, we offer an original and comprehensive framework for autonomous sensor networks powered by renewable energy sources. We decompose our design into two nested optimization steps: the inner step characterizes the optimal network operating point subject to an average energy consumption constraint, while the outer step provides online energy management policies making the system energetically self-sufficient in the presence of unpredictable and intermittent energy sources . Our framework sheds new light into the design of pragmatic schemes for the control of energy harvesting sensor networks and permits to gauge the impact of key sensor network parameters, such as the battery capacity , the harvester size , the information transmission rate and the radio duty cycle. 
We analyze the robustness of the obtained energy management policies in the cases where the nodes have differing energy inflow statistics and where topology changes may occur, devising effective heuristics. Our energy management policies are finally evaluated considering real solar radiation traces, validating them against state of the art solutions and describing the impact of relevant design choices in terms of achievable network throughput and battery level dynamics .", "edit_actions": [{"type": "D", "before": "for energy scavenging networks", "after": null, "start_char_pos": 20, "end_char_pos": 50}, {"type": "R", "before": "in modern sensor network developments. Most existing analyses are however tailored to a single transmitting node or are difficult to map into practical protocol designs.", "after": "for modern sensor networks.", "start_char_pos": 69, "end_char_pos": 238}, {"type": "R", "before": "a", "after": "an original and", "start_char_pos": 254, "end_char_pos": 255}, {"type": "R", "before": "self-sufficient", "after": "autonomous", "start_char_pos": 284, "end_char_pos": 299}, {"type": "R", "before": "To this extent, we decompose the problem in", "after": "We decompose our design into", "start_char_pos": 353, "end_char_pos": 396}, {"type": "D", "before": "in the inner one we characterize the optimal operating point of the network subject to a required energy consumption figure, while in", "after": null, "start_char_pos": 428, "end_char_pos": 561}, {"type": "A", "before": null, "after": "inner step characterizes the optimal network operating point subject to an average energy consumption constraint, while the", "start_char_pos": 566, "end_char_pos": 566}, {"type": "R", "before": ", we provide optimal", "after": "provides online", "start_char_pos": 578, "end_char_pos": 598}, {"type": "R", "before": "to make the system", "after": "making the system energetically", "start_char_pos": 626, "end_char_pos": 644}, {"type": "R", "before": ", given the the 
statistical description of the energy source", "after": "in the presence of unpredictable and intermittent energy sources", "start_char_pos": 661, "end_char_pos": 721}, {"type": "A", "before": null, "after": "sheds new light into the design of pragmatic schemes for the control of energy harvesting sensor networks", "start_char_pos": 738, "end_char_pos": 738}, {"type": "A", "before": null, "after": "and", "start_char_pos": 739, "end_char_pos": 739}, {"type": "R", "before": "size", "after": "capacity", "start_char_pos": 822, "end_char_pos": 826}, {"type": "R", "before": "(e.g., solar panel),", "after": ",", "start_char_pos": 848, "end_char_pos": 868}, {"type": "R", "before": "nodes'", "after": "radio", "start_char_pos": 911, "end_char_pos": 917}, {"type": "R", "before": "In addition, the closed form solution of the inner optimization problem lends itself to the implementation, as an online algorithm, on computationally constrained embedded devices. Finally, we provide results describing the consequences of most of the", "after": "We analyze the robustness of the obtained energy management policies in the cases where the nodes have differing energy inflow statistics and where topology changes may occur, devising effective heuristics. 
Our energy management policies are finally evaluated considering real solar radiation traces, validating them against state of the art solutions and describing the impact of", "start_char_pos": 930, "end_char_pos": 1181}, {"type": "R", "before": "network behavior, maximum achievable throughput and dynamics associated with the optimal energy management policies", "after": "achievable network throughput and battery level dynamics", "start_char_pos": 1218, "end_char_pos": 1333}], "sents_char_pos": [0, 107, 238, 352, 723, 929, 1110]} {"doc_id": "1311.2665", "revision_depth": "1", "before_revision": "The inositol trisphosphate receptor (IPR) is a crucial Ca^{2+ channel that regulates the Ca^{2+} influx from the endoplasmic reticulum (ER) to the cytoplasm. A thorough study of this receptor contributes to a better understanding of calcium oscillations and waves. Based on the patch-clamp experimental data obtained from the outer membranes of isolated nuclei of theXenopus oocyte, we construct an allosteric competition model of single IPR channels on their native ER membrane environment. In our model, each IPR channel consists of four subunits, each of which can exist in two configurations. Each subunit in both configurations has one IP_3 binding site, together with one activating and one inhibitory Ca^{2+ the well-known Monod-Wyman-Changeux allosteric model, we construct our model from the subunit level to the channel level . It turns out that our model successfully reproduces the patch-clamp experimental data of the steady-state open probability, the mean close duration, and the bi-exponential distribution of the open duration . Particularly, our model successfully describes the bimodal [Ca^{2+}] dependence of the mean open duration at high [IP_3], a steady-state behavior which fails to be correctly described in previous IPR models , and the adaptation of the IPR channel, an important dynamic behavior which is seldom discussed in previous IPR models. 
In addition, we find that the gating of the IPR channel is most likely to be a biochemical process that consumes energy.", "after_revision": "The inositol trisphosphate receptor (IPR) is a crucial ion channel that regulates the Ca^{2+} influx from the endoplasmic reticulum (ER) to the cytoplasm. A thorough study of the IPR channel contributes to a better understanding of calcium oscillations and waves. It has long been observed that the IPR channel is a typical biological system which performs adaptation. However, recent advances on the physical essence of adaptation show that adaptation systems with a negative feedback mechanism, such as the IPR channel, must break detailed balance and always operate out of equilibrium with energy dissipation. Almost all previous IPR models are equilibrium models assuming detailed balance and thus violate the physical essence of adaptation. In this article, we constructed a nonequilibrium allosteric model of single IPR channels based on the patch-clamp experimental data obtained from the IPR in the outer membranes of isolated nuclei of theXenopus oocyte . It turns out that our model reproduces the patch-clamp experimental data reasonably well and produces both the correct steady-state and dynamic properties of the channel . Particularly, our model successfully describes the complicated bimodal [Ca^{2+}] dependence of the mean open duration at high [IP_3], a steady-state behavior which fails to be correctly described in previous IPR models . 
Finally, we used the patch-clamp experimental data to validate that the IPR channel indeed breaks detailed balance and thus is a nonequilibrium system which consumes energy.", "edit_actions": [{"type": "R", "before": "Ca^{2+", "after": "ion", "start_char_pos": 55, "end_char_pos": 61}, {"type": "R", "before": "this receptor", "after": "the IPR channel", "start_char_pos": 178, "end_char_pos": 191}, {"type": "D", "before": "Based on the patch-clamp experimental data obtained from the outer membranes of isolated nuclei of the", "after": null, "start_char_pos": 265, "end_char_pos": 367}, {"type": "D", "before": "Xenopus", "after": null, "start_char_pos": 367, "end_char_pos": 374}, {"type": "R", "before": "oocyte, we construct an allosteric competition", "after": "It has long been observed that the IPR channel is a typical biological system which performs adaptation. However, recent advances on the physical essence of adaptation show that adaptation systems with a negative feedback mechanism, such as the IPR channel, must break detailed balance and always operate out of equilibrium with energy dissipation. Almost all previous IPR models are equilibrium models assuming detailed balance and thus violate the physical essence of adaptation. In this article, we constructed a nonequilibrium allosteric", "start_char_pos": 375, "end_char_pos": 421}, {"type": "R", "before": "on their native ER membrane environment. In our model, each IPR channel consists of four subunits, each of which can exist in two configurations. 
Each subunit in both configurations has one IP_3 binding site, together with one activating and one inhibitory Ca^{2+", "after": "based on", "start_char_pos": 451, "end_char_pos": 714}, {"type": "R", "before": "well-known Monod-Wyman-Changeux allosteric model, we construct our model from the subunit level to the channel level", "after": "patch-clamp experimental data obtained from the IPR in the outer membranes of isolated nuclei of the", "start_char_pos": 719, "end_char_pos": 835}, {"type": "A", "before": null, "after": "Xenopus", "start_char_pos": 835, "end_char_pos": 835}, {"type": "A", "before": null, "after": "oocyte", "start_char_pos": 836, "end_char_pos": 836}, {"type": "D", "before": "successfully", "after": null, "start_char_pos": 867, "end_char_pos": 879}, {"type": "R", "before": "of the", "after": "reasonably well and produces both the correct", "start_char_pos": 925, "end_char_pos": 931}, {"type": "R", "before": "open probability, the mean close duration, and the bi-exponential distribution of the open duration", "after": "and dynamic properties of the channel", "start_char_pos": 945, "end_char_pos": 1044}, {"type": "A", "before": null, "after": "complicated", "start_char_pos": 1098, "end_char_pos": 1098}, {"type": "R", "before": ", and the adaptation of the IPR channel, an important dynamic behavior which is seldom discussed in previous IPR models. In addition, we find that the gating of the IPR channel is most likely to be a biochemical process that", "after": ". 
Finally, we used the patch-clamp experimental data to validate that the IPR channel indeed breaks detailed balance and thus is a nonequilibrium system which", "start_char_pos": 1255, "end_char_pos": 1479}], "sents_char_pos": [0, 157, 264, 491, 596, 838, 1116, 1375]} {"doc_id": "1311.5753", "revision_depth": "1", "before_revision": "We fill a void in merging empirical and phenomenological characterisation of the dynamical phase transitions in complex systems by identifying three , essentially different, dynamical phase transitions on real-life financial markets driven by 'macrodynamics' of a superstar-like superhub. We collect and interpret the empirical, numerical, and semi-analytical evidences for the existence of these phase transtions , by considering the Frankfurt Stock Exchange (FSE), as a typical example of a financial market of a medium size. Using the canonical object for the graph theory, i.e. the Minimal Spanning Tree (MST) network, we observe: (i) The initial phase transition from the equilibrium to non-equilibrium MST network in its nucleation phase, occuring at some critical time 2005-08-11 . Coalescence of edges on the FSE's transient leader SALZGITTER (SZG) AG-Stahl und Technologie company, is observed within the nucleation and is approximately characterised by the Lifsthiz-Slyozov growth exponent; (ii) The nucleation accelerates and transforms to the condensation process, in the second phase transition, forming a logarithmically diverging lambda-peak of short-range order parameters at the subsequent critical time 2007-01-25 ; (iii) In the third phase transition, the peak logarithmically decreases over three quarters of the year, resulting in a few loosly connected sub-graphs. This peak , is reminiscent of a non-equilibrium superstar-like superhub or a `dragon king' effect, abruptly accelerating the evolution of the SZG company. 
All these phase transitions are caused by the few richest vertices, which drift towards the SZG centre, providing most of the edges . Thus, we capture an amazing phenomenon, likely of a more universal character, where a periferial vertex becomes the one which is overdominating the complex network during an exceptionally long period of time.", "after_revision": "We fill a void in merging empirical and phenomenological characterisation of the dynamical phase transitions in complex systems by identifying three of them on real-life financial markets . We extract and interpret the empirical, numerical, and semi-analytical evidences for the existence of these phase transitions , by considering the Frankfurt Stock Exchange (FSE), as a typical example of a financial market of a medium size. Using the canonical object for the graph theory, i.e. the Minimal Spanning Tree (MST) network, we observe: (i) The initial phase transition from the equilibrium to non-equilibrium MST network in its nucleation phase, occurring at some critical time . Coalescence of edges on the FSE's transient leader is observed within the nucleation and is approximately characterized by the Lifsthiz-Slyozov growth exponent; (ii) The nucleation accelerates and transforms to the condensation process, in the second phase transition, forming a logarithmically diverging lambda-peak of short-range order parameters at the subsequent critical time - an analogon of such a transition in superfluidity ; (iii) In the third phase transition, the peak logarithmically decreases over three quarters of the year, resulting in a few loosely connected sub-graphs. This peak is reminiscent of a non-equilibrium superstar-like superhub or a `dragon king' effect, abruptly accelerating the evolution of the leader company. All these phase transitions are caused by the few richest vertices, which drift towards the leader and provide the most of the edges increasing the leader's degree . 
Thus, we capture an amazing phenomenon, likely of a more universal character, where a peripheral vertex becomes the one which is over dominating the complex network during an exceptionally long period of time.", "edit_actions": [{"type": "R", "before": ", essentially different, dynamical phase transitions", "after": "of them", "start_char_pos": 149, "end_char_pos": 201}, {"type": "R", "before": "driven by 'macrodynamics' of a superstar-like superhub. We collect", "after": ". We extract", "start_char_pos": 233, "end_char_pos": 299}, {"type": "R", "before": "transtions", "after": "transitions", "start_char_pos": 403, "end_char_pos": 413}, {"type": "R", "before": "occuring", "after": "occurring", "start_char_pos": 745, "end_char_pos": 753}, {"type": "D", "before": "2005-08-11", "after": null, "start_char_pos": 776, "end_char_pos": 786}, {"type": "D", "before": "SALZGITTER (SZG) AG-Stahl und Technologie company,", "after": null, "start_char_pos": 840, "end_char_pos": 890}, {"type": "R", "before": "characterised", "after": "characterized", "start_char_pos": 946, "end_char_pos": 959}, {"type": "R", "before": "2007-01-25", "after": "- an analogon of such a transition in superfluidity", "start_char_pos": 1221, "end_char_pos": 1231}, {"type": "R", "before": "loosly", "after": "loosely", "start_char_pos": 1358, "end_char_pos": 1364}, {"type": "D", "before": ",", "after": null, "start_char_pos": 1397, "end_char_pos": 1398}, {"type": "R", "before": "SZG", "after": "leader", "start_char_pos": 1529, "end_char_pos": 1532}, {"type": "R", "before": "SZG centre, providing", "after": "leader and provide the", "start_char_pos": 1634, "end_char_pos": 1655}, {"type": "A", "before": null, "after": "increasing the leader's degree", "start_char_pos": 1674, "end_char_pos": 1674}, {"type": "R", "before": "periferial", "after": "peripheral", "start_char_pos": 1763, "end_char_pos": 1773}, {"type": "R", "before": "overdominating", "after": "over dominating", "start_char_pos": 1806, 
"end_char_pos": 1820}], "sents_char_pos": [0, 288, 527, 1000, 1233, 1386, 1541, 1676]} {"doc_id": "1312.5565", "revision_depth": "1", "before_revision": "An proof is presented that gene regulatory networks (GRNs) based solely on protein transcription factors (TFs) cannot control the development of complex multicellular life. Hence, GRNs cannot explain the evolution of multicellular life in the Cambrian Explosion. Networks are based on an addressing system which is used to construct network links. The links utilize the addresses to connect two or more linked nodes in the network. The more complex the network the greater the number of links and the larger the required address space. Addresses are formed by combinations of basic units. It has been assumed that combinations of protein transcription factors (TFs) generate a large enough address space to form gene regulatory networks that are complex enough to control the development of complex multicellular life. We prove that TFs do not have sufficient combinatorial power to serve as the basis of an addressing system for regulatory control of genomes in the development of URLanisms. More generally, there are inherent bounds on the complexity of control networks based solely on transcription factors. I show that given n transcription factor genes in a genome and address combinations of length k then there are at most n/k k-length TF-addresses in the address space. The complexity of embryonic development requires a corresponding complexity of control information in the cell and its genome. Therefore, different addressing system must exist to form the complex control networks required for complex control systems. These new developmental control networks are called CENES (for Control genes). The evolution of CENEs made the Cambrian Explosion possible. 
Their code is universal (modulo syntax) in the genomes of all multicellular life.", "after_revision": "A proof is presented that gene regulatory networks (GRNs) based solely on transcription factors cannot control the development of complex multicellular life. GRNs alone cannot explain the evolution of multicellular life in the Cambrian Explosion. Networks are based on addressing systems which are used to construct network links. The more complex the network the greater the number of links and the larger the required address space. It has been assumed that combinations of transcription factors generate a large enough address space to form GRNs that are complex enough to control the development of complex multicellular life. However, it is shown in this article that transcription factors do not have sufficient combinatorial power to serve as the basis of an addressing system for regulatory control of genomes in the development of URLanisms. It is proven that given n transcription factor genes in a genome and address combinations of length k then there are at most n/k k-length transcription factor addresses in the address space. The complexity of embryonic development requires a corresponding complexity of control information in the cell and its genome. Therefore, a different addressing system must exist to form the complex control networks required for complex control systems. It is postulated that a new type of network evolved based on an RNA-DNA addressing system that utilized and subsumed the extant GRNs. These new developmental control networks are called CENES (for Control genes). The evolution of these new higher networks would explain how the Cambrian Explosion was possible. 
The architecture of these higher level networks may in fact be universal (modulo syntax) in the genomes of all multicellular life.", "edit_actions": [{"type": "R", "before": "An", "after": "A", "start_char_pos": 0, "end_char_pos": 2}, {"type": "R", "before": "protein transcription factors (TFs)", "after": "transcription factors", "start_char_pos": 75, "end_char_pos": 110}, {"type": "R", "before": "Hence, GRNs", "after": "GRNs alone", "start_char_pos": 173, "end_char_pos": 184}, {"type": "R", "before": "an addressing system which is", "after": "addressing systems which are", "start_char_pos": 285, "end_char_pos": 314}, {"type": "R", "before": "links utilize the addresses to connect two or more linked nodes in the network. The more", "after": "more", "start_char_pos": 352, "end_char_pos": 440}, {"type": "D", "before": "Addresses are formed by combinations of basic units.", "after": null, "start_char_pos": 536, "end_char_pos": 588}, {"type": "R", "before": "protein transcription factors (TFs)", "after": "transcription factors", "start_char_pos": 630, "end_char_pos": 665}, {"type": "R", "before": "gene regulatory networks", "after": "GRNs", "start_char_pos": 712, "end_char_pos": 736}, {"type": "R", "before": "We prove that TFs", "after": "However, it is shown in this article that transcription factors", "start_char_pos": 819, "end_char_pos": 836}, {"type": "R", "before": "More generally, there are inherent bounds on the complexity of control networks based solely on transcription factors. 
I show", "after": "It is proven", "start_char_pos": 993, "end_char_pos": 1118}, {"type": "R", "before": "TF-addresses", "after": "transcription factor addresses", "start_char_pos": 1244, "end_char_pos": 1256}, {"type": "A", "before": null, "after": "a", "start_char_pos": 1417, "end_char_pos": 1417}, {"type": "A", "before": null, "after": "It is postulated that a new type of network evolved based on an RNA-DNA addressing system that utilized and subsumed the extant GRNs.", "start_char_pos": 1532, "end_char_pos": 1532}, {"type": "R", "before": "CENEs made", "after": "these new higher networks would explain how", "start_char_pos": 1629, "end_char_pos": 1639}, {"type": "R", "before": "possible. Their code is", "after": "was possible. The architecture of these higher level networks may in fact be", "start_char_pos": 1663, "end_char_pos": 1686}], "sents_char_pos": [0, 172, 262, 347, 431, 535, 588, 818, 992, 1111, 1278, 1405, 1531, 1611, 1672]} {"doc_id": "1312.7111", "revision_depth": "1", "before_revision": "The determination of whether a gene is protein coding or not is a key goal of genome annotation projects. Peptide mass spectrometry is a powerful tool for detecting cellular expression of proteins and therefore is an attractive approach for verifying the protein coding potential of genes. However, a range of technical difficulties limit the coverage from proteomics experiments, with the highest recorded coverage of the human proteome being approximately 50\\% of human genes . Here we map the peptides detected in 7 large-scale proteomics studies to the GENCODE v12 annotation of the human genome and identify almost 60\\% of the protein coding genes . We find that there are surprisingly strong correlations between peptide detection and cross-species conservation, gene age and the presence of protein-like features. The age of the gene and its conservation across vertebrate species are key indicators of whether a peptide will be detected in proteomics experiments. 
We find peptides for most highly conserved genes and for practically all genes that evolved before bilateria. At the same time there is little or no evidence for protein expression for genes that have appeared since primates or that do not have any protein-like features or conservation. Based on our results we describe a set of 2,001 genes that have no protein features and poor conservation, or have ambiguous annotations in gene or protein databases. We suggest that many of the genes that lack supporting evidence and that are not detected in proteomics experiments, do not code for proteins under normal circumstances and that they should not be included in the human protein coding gene catalogue.", "after_revision": "Determining the full complement of protein-coding genes is a key goal of genome annotation . The most powerful approach for confirming protein coding potential is the detection of cellular protein expression through peptide mass spectrometry experiments . Here we map the peptides detected in 7 large-scale proteomics studies to almost 60\\% of the protein coding genes in the GENCODE annotation the human genome . We find the age of the gene family and its conservation across vertebrate species are key indicators of whether a peptide will be detected in proteomics experiments. We find peptides for most highly conserved genes and for practically all genes that evolved before bilateria. At the same time there is little or no evidence of protein expression for novel genes, those that have appeared since primates , or genes that do not have any protein-like features or cross-species conservation. We identify 19 non-protein like features such as weak conservation, no protein-like features or ambiguous annotations in the major databases that are indicators of low peptide detection rates. 
We use these features to describe a set of 2,001 genes that are potentially non-coding, and show that many of these genes behave more like non-coding genes than protein-coding genes. We detect peptides for just 3\\% of these genes. We suggest that many of these 2,001 genes do not code for proteins under normal circumstances and that they should not be included in the human protein coding gene catalogue.", "edit_actions": [{"type": "R", "before": "The determination of whether a gene is protein coding or not is", "after": "Determining the full complement of protein-coding genes is", "start_char_pos": 0, "end_char_pos": 63}, {"type": "R", "before": "projects. Peptide mass spectrometry is a powerful tool for detecting cellular expression of proteins and therefore is an attractive approach for verifying the", "after": ". The most powerful approach for confirming", "start_char_pos": 96, "end_char_pos": 254}, {"type": "R", "before": "of genes. However, a range of technical difficulties limit the coverage from proteomics experiments, with the highest recorded coverage of the human proteome being approximately 50\\% of human genes", "after": "is the detection of cellular protein expression through peptide mass spectrometry experiments", "start_char_pos": 280, "end_char_pos": 477}, {"type": "D", "before": "the GENCODE v12 annotation of the human genome and identify", "after": null, "start_char_pos": 553, "end_char_pos": 612}, {"type": "A", "before": null, "after": "in the GENCODE annotation the human genome", "start_char_pos": 653, "end_char_pos": 653}, {"type": "R", "before": "that there are surprisingly strong correlations between peptide detection and cross-species conservation, gene age and the presence of protein-like features. 
The", "after": "the", "start_char_pos": 664, "end_char_pos": 825}, {"type": "A", "before": null, "after": "family", "start_char_pos": 842, "end_char_pos": 842}, {"type": "R", "before": "for", "after": "of", "start_char_pos": 1132, "end_char_pos": 1135}, {"type": "R", "before": "genes", "after": "novel genes, those", "start_char_pos": 1159, "end_char_pos": 1164}, {"type": "R", "before": "or", "after": ", or genes", "start_char_pos": 1199, "end_char_pos": 1201}, {"type": "R", "before": "conservation. Based on our results we", "after": "cross-species conservation. We identify 19 non-protein like features such as weak conservation, no protein-like features or ambiguous annotations in the major databases that are indicators of low peptide detection rates. We use these features to", "start_char_pos": 1248, "end_char_pos": 1285}, {"type": "R", "before": "have no protein features and poor conservation, or have ambiguous annotations in gene or protein databases. We", "after": "are potentially non-coding, and show that many of these genes behave more like non-coding genes than protein-coding genes. We detect peptides for just 3\\% of these genes. We", "start_char_pos": 1321, "end_char_pos": 1431}, {"type": "R", "before": "the genes that lack supporting evidence and that are not detected in proteomics experiments,", "after": "these 2,001 genes", "start_char_pos": 1453, "end_char_pos": 1545}], "sents_char_pos": [0, 105, 289, 479, 655, 821, 973, 1083, 1261, 1428]} {"doc_id": "1401.4913", "revision_depth": "1", "before_revision": "Networks are widely used to represent the interaction pattern in complex systems. Structures of real networks from different domain may vary quite significantly , and so the dynamicson the network. Which also plays an important role in communication and information spreading on networks . Here we have explored underlying structures of different biological networks which support faster spreading and are better in communication between nodes in them. 
In this regard, we have analyzed the good expansion property , from large spectral gap , and communicability between nodes in those networks . Different epidemic models are also used to study the same. Moreover we have explored the structural conformation and properties which might be cause for better communication. Our study shows that the underlying topology in neural networks is significantly qualitatively different than the same from other biological networks and they may have evolved such a way that they inherit a structure which is excellent and robust in communication.", "after_revision": "Networks are widely used to represent interaction pattern among the components in complex systems. Structures of real networks from differ- ent domains may vary quite significantly . Since there is an interplay be- tween network architecture and dynamics, structure plays an important role in communication and information spreading on a network . Here we investigate the underlying undirected topology of different biological networks which support faster spreading of information and are better in communication . We analyze the good expansion property by using the spectral gap and communicability between nodes . Different epidemic models are also used to study the transmission of information in terms of disease spreading through individuals (nodes) in those networks. More- over, we explore the structural conformation and properties which may be responsible for better communication. Among all biological networks studied here, the undirected structure of neuronal networks not only pos- sesses the small-world property but the same is expressed remarkably to a higher degree than any randomly generated network which possesses the same degree sequence. A relatively high percentage of nodes, in neuronal networks, form a higher core in their structure. 
Our study shows that the underlying undirected topology in neuronal networks is significantly qualitatively different than the same from other biological networks and that they may have evolved in such a way that they inherit a (undirected) structure which is excellent and robust in communication.", "edit_actions": [{"type": "R", "before": "the interaction pattern", "after": "interaction pattern among the components", "start_char_pos": 38, "end_char_pos": 61}, {"type": "R", "before": "different domain", "after": "differ- ent domains", "start_char_pos": 115, "end_char_pos": 131}, {"type": "R", "before": ", and so the dynamicson the network. Which also", "after": ". Since there is an interplay be- tween network architecture and dynamics, structure", "start_char_pos": 161, "end_char_pos": 208}, {"type": "R", "before": "networks", "after": "a network", "start_char_pos": 279, "end_char_pos": 287}, {"type": "R", "before": "have explored underlying structures", "after": "investigate the underlying undirected topology", "start_char_pos": 298, "end_char_pos": 333}, {"type": "A", "before": null, "after": "of information", "start_char_pos": 398, "end_char_pos": 398}, {"type": "R", "before": "between nodes in them. In this regard, we have analyzed", "after": ". We analyze", "start_char_pos": 431, "end_char_pos": 486}, {"type": "R", "before": ", from large spectral gap ,", "after": "by using the spectral gap", "start_char_pos": 515, "end_char_pos": 542}, {"type": "D", "before": "in those networks", "after": null, "start_char_pos": 577, "end_char_pos": 594}, {"type": "R", "before": "same. Moreover we have explored", "after": "transmission of information in terms of disease spreading through individuals (nodes) in those networks. 
More- over, we explore", "start_char_pos": 650, "end_char_pos": 681}, {"type": "R", "before": "might be cause", "after": "may be responsible", "start_char_pos": 731, "end_char_pos": 745}, {"type": "A", "before": null, "after": "Among all biological networks studied here, the undirected structure of neuronal networks not only pos- sesses the small-world property but the same is expressed remarkably to a higher degree than any randomly generated network which possesses the same degree sequence. A relatively high percentage of nodes, in neuronal networks, form a higher core in their structure.", "start_char_pos": 772, "end_char_pos": 772}, {"type": "R", "before": "topology in neural", "after": "undirected topology in neuronal", "start_char_pos": 809, "end_char_pos": 827}, {"type": "A", "before": null, "after": "that", "start_char_pos": 927, "end_char_pos": 927}, {"type": "A", "before": null, "after": "in", "start_char_pos": 950, "end_char_pos": 950}, {"type": "A", "before": null, "after": "(undirected)", "start_char_pos": 982, "end_char_pos": 982}], "sents_char_pos": [0, 81, 197, 289, 453, 655, 771]} {"doc_id": "1401.8026", "revision_depth": "2", "before_revision": "Financial markets are exposed to systemic risk (SR), the risk that a major fraction of the system ceases to function and collapses. Since recently it is possible to quantify SR in terms of underlying financial networks where nodes represent financial institutions, and links capture the size and maturity of assets (loans), liabilities, and other obligations such as derivatives. We show that it is possible to quantify the share of SR that individual liabilities in a financial network contribute to the overall SR. We use empirical data of nation-wide interbank liabilities to show that a few liabilities carry the major fraction of the overall SR . We propose a tax on individual transactions that is proportional to their contribution to overall SR. If a transaction does not increase SR it is tax free . 
With an agent based model (CRISIS macro-financial model) we demonstrate that the proposed Systemic Risk Tax (SRT) leads to a URLanized re-structuring of financial networks that are practically free of SR. ABM predictions agree remarkably well with the empirical data and can be used to understand the relation of credit risk and SR.", "after_revision": "Financial markets are exposed to systemic risk (SR), the risk that a major fraction of the system ceases to function , and collapses. It has recently become possible to quantify SR in terms of underlying financial networks where nodes represent financial institutions, and links capture the size and maturity of assets (loans), liabilities, and other obligations , such as derivatives. We demonstrate that it is possible to quantify the share of SR that individual liabilities within a financial network contribute to the overall SR. We use empirical data of nationwide interbank liabilities to show that the marginal contribution to overall SR of liabilities for a given size varies by a factor of a thousand . We propose a tax on individual transactions that is proportional to their marginal contribution to overall SR. If a transaction does not increase SR it is tax-free . With an agent-based model (CRISIS macro-financial model) we demonstrate that the proposed \" Systemic Risk Tax \" (SRT) leads to a URLanised restructuring of financial networks that are practically free of SR. The SRT can be seen as an insurance for the public against costs arising from cascading failure. 
ABM predictions are shown to be in remarkable agreement with the empirical data and can be used to understand the relation of credit risk and SR.", "edit_actions": [{"type": "A", "before": null, "after": ",", "start_char_pos": 117, "end_char_pos": 117}, {"type": "R", "before": "Since recently it is", "after": "It has recently become", "start_char_pos": 133, "end_char_pos": 153}, {"type": "A", "before": null, "after": ",", "start_char_pos": 360, "end_char_pos": 360}, {"type": "R", "before": "show", "after": "demonstrate", "start_char_pos": 385, "end_char_pos": 389}, {"type": "R", "before": "in", "after": "within", "start_char_pos": 466, "end_char_pos": 468}, {"type": "R", "before": "nation-wide", "after": "nationwide", "start_char_pos": 544, "end_char_pos": 555}, {"type": "R", "before": "a few liabilities carry the major fraction of the overall SR", "after": "the marginal contribution to overall SR of liabilities for a given size varies by a factor of a thousand", "start_char_pos": 591, "end_char_pos": 651}, {"type": "A", "before": null, "after": "marginal", "start_char_pos": 728, "end_char_pos": 728}, {"type": "R", "before": "tax free", "after": "tax-free", "start_char_pos": 801, "end_char_pos": 809}, {"type": "R", "before": "agent based", "after": "agent-based", "start_char_pos": 820, "end_char_pos": 831}, {"type": "A", "before": null, "after": "\"", "start_char_pos": 902, "end_char_pos": 902}, {"type": "A", "before": null, "after": "\"", "start_char_pos": 921, "end_char_pos": 921}, {"type": "R", "before": "URLanized re-structuring", "after": "URLanised restructuring", "start_char_pos": 939, "end_char_pos": 963}, {"type": "R", "before": "ABM predictions agree remarkably well", "after": "The SRT can be seen as an insurance for the public against costs arising from cascading failure. 
ABM predictions are shown to be in remarkable agreement", "start_char_pos": 1019, "end_char_pos": 1056}], "sents_char_pos": [0, 132, 381, 518, 756, 811]} {"doc_id": "1403.3459", "revision_depth": "1", "before_revision": "It has been understood that the `` local\" existence of the Markowitz' optimal portfolio or the solution to the local risk minimization problem is guaranteed by some specific mathematical structures on the underlying assets price processes (called ``Structure Conditions\" in the literature ){\\it . In this paper, we consider a semi-martingale market model (initial market model) fulfilling these structures , and an arbitrary random time that is not adapted to the flow of the ``public\" information. By adding additional uncertainty to the initial market model, via this random time, those structures may fail{\\it . Our aim is to address the question of how this random time will affect these structures from different perspectives. Our analysis allowed us to conclude that under some mild assumptions on the market model and the random time, these structures will remain valid on the one hand. Furthermore, we provide two examples illustrating the importance of these assumptions. On the other hand, we describe the random time models for which these structure conditions are preserved for any market model. These results are elaborated separately for the two contexts of stopping with random time and incorporating totally a specific class of random times respectively.", "after_revision": "It has been understood that the \" local\" existence of the Markowitz' optimal portfolio or the solution to the local-risk minimization problem is guaranteed by some specific mathematical structures on the underlying assets price processes known in the literature as \"{\\it Structure Conditions \" . In this paper, we consider a semi-martingale market model , and an arbitrary random time that is not adapted to the information flow of the market model. 
This random time may model the default time of a firm, the death time of an insured, or any the occurrence time of an event that might impact the market model somehow. By adding additional uncertainty to the market model, via this random time, the{\\it structures conditions may fail and hence the Markowitz's optimal portfolio and other quadratic-optimal portfolios might fail to exist . Our aim is to investigate the impact of this random time on the structures conditions from different perspectives. Our analysis allows us to conclude that under some mild assumptions on the market model and the random time, these structures conditions will remain valid on the one hand. Furthermore, we provide two examples illustrating the importance of these assumptions. On the other hand, we describe the random time models for which these structure conditions are preserved for any market model. These results are elaborated separately for the two contexts of stopping with the random time and incorporating totally a specific class of random times respectively.", "edit_actions": [{"type": "R", "before": "``", "after": "\"", "start_char_pos": 32, "end_char_pos": 34}, {"type": "R", "before": "local risk", "after": "local-risk", "start_char_pos": 111, "end_char_pos": 121}, {"type": "R", "before": "(called ``Structure Conditions\"", "after": "known", "start_char_pos": 239, "end_char_pos": 270}, {"type": "R", "before": ")", "after": "as \"", "start_char_pos": 289, "end_char_pos": 290}, {"type": "A", "before": null, "after": "Structure Conditions", "start_char_pos": 295, "end_char_pos": 295}, {"type": "A", "before": null, "after": "\"", "start_char_pos": 296, "end_char_pos": 296}, {"type": "D", "before": "(initial market model) fulfilling these structures", "after": null, "start_char_pos": 357, "end_char_pos": 407}, {"type": "A", "before": null, "after": "information", "start_char_pos": 466, "end_char_pos": 466}, {"type": "R", "before": "``public\" information.", "after": "market model. 
This random time may model the default time of a firm, the death time of an insured, or any the occurrence time of an event that might impact the market model somehow.", "start_char_pos": 479, "end_char_pos": 501}, {"type": "D", "before": "initial", "after": null, "start_char_pos": 542, "end_char_pos": 549}, {"type": "R", "before": "those structures may fail", "after": "the", "start_char_pos": 586, "end_char_pos": 611}, {"type": "A", "before": null, "after": "structures conditions", "start_char_pos": 616, "end_char_pos": 616}, {"type": "A", "before": null, "after": "may fail and hence the Markowitz's optimal portfolio and other quadratic-optimal portfolios might fail to exist", "start_char_pos": 617, "end_char_pos": 617}, {"type": "R", "before": "address the question of how", "after": "investigate the impact of", "start_char_pos": 634, "end_char_pos": 661}, {"type": "R", "before": "will affect these structures", "after": "on the structures conditions", "start_char_pos": 679, "end_char_pos": 707}, {"type": "R", "before": "allowed", "after": "allows", "start_char_pos": 750, "end_char_pos": 757}, {"type": "A", "before": null, "after": "conditions", "start_char_pos": 864, "end_char_pos": 864}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1192, "end_char_pos": 1192}], "sents_char_pos": [0, 298, 501, 619, 736, 899, 986, 1113]} {"doc_id": "1406.6951", "revision_depth": "3", "before_revision": "In this paper we consider the optimal transport approach for computing the model-free prices of a given path-dependent contingent claim in a two periods model. More precisely, we first specialize the optimal transport plan introduced in \\mbox{%DIFAUXCMD BeiglJuil0pt%DIFAUXCMD , following the construction of \\mbox{%DIFAUXCMD BrenierMartingale}0pt%DIFAUXCMD , } as well as the one 0pt%DIFAUXCMD and further studied in \\mbox{%DIFAUXCMD BrenierMartingale}\\hspace{0pt}%DIFAUXCMD . 
We show that, } in \\mbox{%DIFAUXCMD \\cite{HobsonKlimmek2013}\\hspace{0pt}%DIFAUXCMD , to } the case of positive martingales and a single maximizer for the difference between the c.d.f.'s of the two marginals. These characterizations allow us to study the effect of the change of numeraire on the corresponding super and subhedging model-free prices. It turns out that, for \\mbox{%DIFAUXCMD \\cite{BrenierMartingale}\\hspace{0pt}%DIFAUXCMD 's construction, the change of numeraire can be viewed as a mirror coupling for positive martingales, while for \\mbox{%DIFAUXCMD \\cite{HobsonKlimmek2013} }\\hspace{0pt}%DIFAUXCMD it } }\\hspace{0pt}%DIFAUXCMD } exchanges forward start straddles of type I and type II giving also that the optimal transport plan in the subhedging problems is the same for both types of options. Some numerical applications are}\\hspace{0pt}%DIFAUXCMD 's construction, the right monotone transference plan can be viewed as a mirror coupling of its left counterpart under the change of numeraire. An application to stochastic volatility models is also } provided.", "after_revision": "In this paper we apply change of numeraire techniques to the optimal transport approach for computing model-free prices of derivatives in a two periods model. In particular, we consider the optimal transport plan 0pt%DIFAUXCMD , following the construction of \\mbox{%DIFAUXCMD BrenierMartingale}0pt%DIFAUXCMD , } constructed in \\mbox{%DIFAUXCMD HobsonKlimmek2013 as well as the one introduced in \\mbox{%DIFAUXCMD BeiglJuil0pt%DIFAUXCMD and further studied in \\mbox{%DIFAUXCMD BrenierMartingale}\\hspace{0pt}%DIFAUXCMD . 
We show that, } in }\\hspace{0pt}%DIFAUXCMD , to } the case of positive martingales , a suitable change of numeraire }\\hspace{0pt}%DIFAUXCMD 's construction, the change of numeraire can be viewed as a mirror coupling for positive martingales, while for \\mbox{%DIFAUXCMD \\cite{HobsonKlimmek2013} }\\hspace{0pt}%DIFAUXCMD it } applied to \\mbox{%DIFAUXCMD \\cite{HobsonKlimmek2013 }\\hspace{0pt}%DIFAUXCMD } exchanges forward start straddles of type I and type II , so that the optimal transport plan in the subhedging problems is the same for both types of options. Moreover, for \\mbox{%DIFAUXCMD \\cite{BrenierMartingale}\\hspace{0pt}%DIFAUXCMD 's construction, the right monotone transference plan can be viewed as a mirror coupling of its left counterpart under the change of numeraire. An application to stochastic volatility models is also } provided.", "edit_actions": [{"type": "R", "before": "consider", "after": "apply change of numeraire techniques to", "start_char_pos": 17, "end_char_pos": 25}, {"type": "D", "before": "the", "after": null, "start_char_pos": 71, "end_char_pos": 74}, {"type": "R", "before": "a given path-dependent contingent claim", "after": "derivatives", "start_char_pos": 96, "end_char_pos": 135}, {"type": "R", "before": "More precisely, we first specialize", "after": "In particular, we consider", "start_char_pos": 160, "end_char_pos": 195}, {"type": "D", "before": "introduced in \\mbox{%DIFAUXCMD BeiglJuil", "after": null, "start_char_pos": 223, "end_char_pos": 263}, {"type": "A", "before": null, "after": "constructed in \\mbox{%DIFAUXCMD HobsonKlimmek2013", "start_char_pos": 362, "end_char_pos": 362}, {"type": "A", "before": null, "after": "introduced in \\mbox{%DIFAUXCMD BeiglJuil", "start_char_pos": 382, "end_char_pos": 382}, {"type": "D", "before": "\\mbox{%DIFAUXCMD \\cite{HobsonKlimmek2013", "after": null, "start_char_pos": 498, "end_char_pos": 538}, {"type": "R", "before": "and a single maximizer for the difference between the c.d.f.'s of 
the two marginals. These characterizations allow us to study the effect of the", "after": ", a suitable", "start_char_pos": 602, "end_char_pos": 746}, {"type": "D", "before": "on the corresponding super and subhedging model-free prices. It turns out that, for \\mbox{%DIFAUXCMD \\cite{BrenierMartingale", "after": null, "start_char_pos": 767, "end_char_pos": 891}, {"type": "A", "before": null, "after": "applied to \\mbox{%DIFAUXCMD \\cite{HobsonKlimmek2013", "start_char_pos": 1098, "end_char_pos": 1098}, {"type": "R", "before": "giving also", "after": ", so", "start_char_pos": 1181, "end_char_pos": 1192}, {"type": "R", "before": "Some numerical applications are", "after": "Moreover, for \\mbox{%DIFAUXCMD \\cite{BrenierMartingale", "start_char_pos": 1291, "end_char_pos": 1322}], "sents_char_pos": [0, 159, 175, 663, 686, 827, 1290, 1489]} {"doc_id": "1407.4374", "revision_depth": "3", "before_revision": "Target identification aims at identifying biomolecules whose function should be therapeutically altered to cure the considered pathology. An algorithm for in silico target identification using boolean network attractorsis proposed . It assumes that attractors correspond to phenotypes produced by the modeled biological network. It identifies target combinations which allow disturbed networks to avoid attractors associated with pathological phenotypes. The algorithm is tested on a boolean model of the mammalian cell cycle and its applications are illustrated on a boolean model of Fanconi anemia. Results show that the algorithm returns target combinations able to remove attractors associated with pathological phenotypes and then succeeds in performing the proposed in silico target identification. However, as with any in silico evidence, there is a bridge to cross between theory and practice . 
Nevertheless, it is expected that the algorithm is of interest for target identification .", "after_revision": "Target identification , one of the steps of drug discovery, aims at identifying biomolecules whose function should be therapeutically altered in order to cure the considered pathology. This work proposes an algorithm for in silico target identification using Boolean network attractors . It assumes that attractors of dynamical systems, such as Boolean networks, correspond to phenotypes produced by the modeled biological system. Under this assumption, and given a Boolean network modeling a pathophysiology, the algorithm identifies target combinations able to remove attractors associated with pathological phenotypes. It is tested on a Boolean model of the mammalian cell cycle bearing a constitutive inactivation of the retinoblastoma protein, as seen in cancers, and its applications are illustrated on a Boolean model of Fanconi anemia. The results show that the algorithm returns target combinations able to remove attractors associated with pathological phenotypes and then succeeds in performing the proposed in silico target identification. However, as with any in silico evidence, there is a bridge to cross between theory and practice , thus requiring it to be used in combination with wet lab experiments . 
Nevertheless, it is expected that the algorithm is of interest for target identification , notably by exploiting the inexpensiveness and predictive power of computational approaches to optimize the efficiency of costly wet lab experiments .", "edit_actions": [{"type": "A", "before": null, "after": ", one of the steps of drug discovery,", "start_char_pos": 22, "end_char_pos": 22}, {"type": "A", "before": null, "after": "in order", "start_char_pos": 105, "end_char_pos": 105}, {"type": "R", "before": "An", "after": "This work proposes an", "start_char_pos": 140, "end_char_pos": 142}, {"type": "R", "before": "boolean network attractorsis proposed", "after": "Boolean network attractors", "start_char_pos": 195, "end_char_pos": 232}, {"type": "A", "before": null, "after": "of dynamical systems, such as Boolean networks,", "start_char_pos": 262, "end_char_pos": 262}, {"type": "R", "before": "network. It", "after": "system. Under this assumption, and given a Boolean network modeling a pathophysiology, the algorithm", "start_char_pos": 323, "end_char_pos": 334}, {"type": "R", "before": "which allow disturbed networks to avoid", "after": "able to remove", "start_char_pos": 366, "end_char_pos": 405}, {"type": "R", "before": "The algorithm", "after": "It", "start_char_pos": 458, "end_char_pos": 471}, {"type": "R", "before": "boolean", "after": "Boolean", "start_char_pos": 487, "end_char_pos": 494}, {"type": "A", "before": null, "after": "bearing a constitutive inactivation of the retinoblastoma protein, as seen in cancers,", "start_char_pos": 529, "end_char_pos": 529}, {"type": "R", "before": "boolean", "after": "Boolean", "start_char_pos": 572, "end_char_pos": 579}, {"type": "R", "before": "Results", "after": "The results", "start_char_pos": 605, "end_char_pos": 612}, {"type": "A", "before": null, "after": ", thus requiring it to be used in combination with wet lab experiments", "start_char_pos": 905, "end_char_pos": 905}, {"type": "A", "before": null, "after": ", notably by 
exploiting the inexpensiveness and predictive power of computational approaches to optimize the efficiency of costly wet lab experiments", "start_char_pos": 997, "end_char_pos": 997}], "sents_char_pos": [0, 139, 234, 331, 457, 604, 808, 907]} {"doc_id": "1408.2725", "revision_depth": "1", "before_revision": "Biotechnological expertise and related tools (required e.g. for drug synthesis) increases daily mainly driven by a continuum of tumultuous experimental breakthroughs . In particular, recently, scientists have been able to build in-vitro reaction kinetics ( among enzymatic proteins and their ligands{\\em ) which code for two-inputs logic gates mimicking the stochastic AND ( NAND) and the stochastic OR ( NOR), beyond simpler and already known single-input gates ( as YES and NOT), the whole triggering prompt effort even from the theoretical counterpart. To this task, several allosteric receptor-ligand systems are hereby described according to the Monod-Wyman-Changeaux model: with the purpose of exploring their concrete functional capabilities to express logical operators and/or perform logical operations , we revise the concept of cooperativity for allosteric systems trough an extensive treatment of their statistical mechanical formulation ( with particular emphasis on the ranges and scaling of the involved parameters actually missing in the Literature) and we show how these reactions may successfully encode logical computing, beyond the YES and NOT gates, playing also as stochastic version of the AND (NAND) and the OR (NOR)operators .", "after_revision": "Recent experimental breakthroughs have finally allowed to implement in-vitro reaction kinetics ( the so called{\\em enzyme based logic ) which code for two-inputs logic gates and mimic the stochastic AND ( and NAND) as well as the stochastic OR ( and NOR). 
This accomplishment, together with the already-known single-input gates ( performing as YES and NOT), provides a logic base and paves the way to the development of powerful biotechnological devices. The investigation of this field would enormously benefit from a self-consistent, predictive, theoretical framework. Here we formulate a complete statistical mechanical description of the Monod-Wyman-Changeaux allosteric model for both single and double ligand systems, with the purpose of exploring their practical capabilities to express logical operators and/or perform logical operations . Mixing statistical mechanics with logics, and quantitatively our findings with the available biochemical data, we successfully revise the concept of cooperativity (and anti-cooperativity) for allosteric systems , with particular emphasis on its computational capabilities, the related ranges and scaling of the involved parameters and its differences with classical cooperativity (and anti-cooperativity) .", "edit_actions": [{"type": "R", "before": "Biotechnological expertise and related tools (required e.g. for drug synthesis) increases daily mainly driven by a continuum of tumultuous experimental breakthroughs . In particular, recently, scientists have been able to build", "after": "Recent experimental breakthroughs have finally allowed to implement", "start_char_pos": 0, "end_char_pos": 227}, {"type": "R", "before": "among enzymatic proteins and their ligands", "after": "the so called", "start_char_pos": 257, "end_char_pos": 299}, {"type": "A", "before": null, "after": "enzyme based logic", "start_char_pos": 304, "end_char_pos": 304}, {"type": "R", "before": "mimicking", "after": "and mimic", "start_char_pos": 345, "end_char_pos": 354}, {"type": "R", "before": "NAND) and", "after": "and NAND) as well as", "start_char_pos": 376, "end_char_pos": 385}, {"type": "R", "before": "NOR), beyond simpler and already known", "after": "and NOR). 
This accomplishment, together with the already-known", "start_char_pos": 406, "end_char_pos": 444}, {"type": "A", "before": null, "after": "performing", "start_char_pos": 466, "end_char_pos": 466}, {"type": "A", "before": null, "after": "provides a logic base and paves the way to the development of powerful biotechnological devices. The investigation of this field would enormously benefit from a self-consistent, predictive, theoretical framework. Here we formulate a complete statistical mechanical description of", "start_char_pos": 484, "end_char_pos": 484}, {"type": "D", "before": "whole triggering prompt effort even from the theoretical counterpart. To this task, several allosteric receptor-ligand systems are hereby described according to the", "after": null, "start_char_pos": 489, "end_char_pos": 653}, {"type": "R", "before": "model:", "after": "allosteric model for both single and double ligand systems,", "start_char_pos": 676, "end_char_pos": 682}, {"type": "R", "before": "concrete functional", "after": "practical", "start_char_pos": 719, "end_char_pos": 738}, {"type": "R", "before": ", we", "after": ". 
Mixing statistical mechanics with logics, and quantitatively our findings with the available biochemical data, we successfully", "start_char_pos": 815, "end_char_pos": 819}, {"type": "A", "before": null, "after": "(and anti-cooperativity)", "start_char_pos": 856, "end_char_pos": 856}, {"type": "R", "before": "trough an extensive treatment of their statistical mechanical formulation (", "after": ",", "start_char_pos": 880, "end_char_pos": 955}, {"type": "R", "before": "the", "after": "its computational capabilities, the related", "start_char_pos": 984, "end_char_pos": 987}, {"type": "R", "before": "actually missing in the Literature) and we show how these reactions may successfully encode logical computing, beyond the YES and NOT gates, playing also as stochastic version of the AND (NAND) and the OR (NOR)operators", "after": "and its differences with classical cooperativity (and anti-cooperativity)", "start_char_pos": 1034, "end_char_pos": 1253}], "sents_char_pos": [0, 167, 558, 682, 1069]} {"doc_id": "1410.0104", "revision_depth": "1", "before_revision": "Financial networks are dynamic and to assess systemic importance and avert losses we needs models which take the time variations of the links and nodes into account. We develop a model that can predict the response of the financial network to a shock and propose a measure for the systemic importance of the banks, which we call BankRank. Using the European Bank Authority 2011 stress test exposure data, we apply our model to the bipartite network of the largest institutional holders of troubled European countries (Greece, Italy, Portugal, Spain, and Ireland). Simulation of the states in our model reveal that it has \" calm \" state , where shocks do not cause very major losses, and \" panicked \" states, in which devastating damages occur. 
Fitting the parameters to Eurocrisis data shows that , before the crisis, the system was mostly in the \" calm \" regime while during the Eurocrisis it went into the \" panicked \" regime. The numerical solutions of the our model fit to a good degree to what really happened in the crisis. We also find that, while the largest holders are usually more important, sometimes smaller holders also exhibit systemic importance. In addition, we observe that asset diversification has no clear correlation with our BankRank. Thus diversification is neither reducing systemic, nor necessarily providing routes for contagion. These suggest that our model may provide a useful tool for determining the vulnerability of banks and assets to shocks and for simulating shared portfolio networks in general .", "after_revision": "Financial networks are dynamic . To assess their systemic importance to the world-wide economic network and avert losses we need models that take the time variations of the links and nodes into account. Using the methodology of classical mechanics and Laplacian determinism we develop a model that can predict the response of the financial network to a shock . We also propose a way of measuring the systemic importance of the banks, which we call BankRank. Using European Bank Authority 2011 stress test exposure data, we apply our model to the bipartite network connecting the largest institutional debt holders of the troubled European countries (Greece, Italy, Portugal, Spain, and Ireland). From simulating our model we can determine whether a network is in a \" stable \" state in which shocks do not cause major losses, or a \" unstable \" state in which devastating damages occur. Fitting the parameters of the model, which play the role of physical coupling constants, to Eurozone crisis data shows that before the Eurozone crisis the system was mostly in a \" stable \" regime , and that during the crisis it transitioned into an \" unstable \" regime. 
The numerical solutions produced by our model match closely the actual time-line of events of the crisis. We also find that, while the largest holders are usually more important, in the unstable regime smaller holders also exhibit systemic importance. Our model also proves useful for determining the vulnerability of banks and assets to shocks . This suggests that our model may be a useful tool for simulating the response dynamics of shared portfolio networks .", "edit_actions": [{"type": "R", "before": "and to assess systemic importance", "after": ". To assess their systemic importance to the world-wide economic network", "start_char_pos": 31, "end_char_pos": 64}, {"type": "R", "before": "needs models which", "after": "need models that", "start_char_pos": 85, "end_char_pos": 103}, {"type": "R", "before": "We", "after": "Using the methodology of classical mechanics and Laplacian determinism we", "start_char_pos": 166, "end_char_pos": 168}, {"type": "R", "before": "and propose a measure for", "after": ". 
We also propose a way of measuring", "start_char_pos": 251, "end_char_pos": 276}, {"type": "D", "before": "the", "after": null, "start_char_pos": 345, "end_char_pos": 348}, {"type": "R", "before": "of", "after": "connecting", "start_char_pos": 449, "end_char_pos": 451}, {"type": "R", "before": "holders of", "after": "debt holders of the", "start_char_pos": 478, "end_char_pos": 488}, {"type": "R", "before": "Simulation of the states in our model reveal that it has", "after": "From simulating our model we can determine whether a network is in a", "start_char_pos": 564, "end_char_pos": 620}, {"type": "R", "before": "calm", "after": "stable", "start_char_pos": 623, "end_char_pos": 627}, {"type": "R", "before": ", where", "after": "in which", "start_char_pos": 636, "end_char_pos": 643}, {"type": "D", "before": "very", "after": null, "start_char_pos": 664, "end_char_pos": 668}, {"type": "R", "before": "and", "after": "or a", "start_char_pos": 683, "end_char_pos": 686}, {"type": "R", "before": "panicked", "after": "unstable", "start_char_pos": 689, "end_char_pos": 697}, {"type": "R", "before": "states,", "after": "state", "start_char_pos": 700, "end_char_pos": 707}, {"type": "R", "before": "to Eurocrisis", "after": "of the model, which play the role of physical coupling constants, to Eurozone crisis", "start_char_pos": 767, "end_char_pos": 780}, {"type": "R", "before": ", before the crisis,", "after": "before", "start_char_pos": 797, "end_char_pos": 817}, {"type": "A", "before": null, "after": "Eurozone crisis the", "start_char_pos": 822, "end_char_pos": 822}, {"type": "R", "before": "the", "after": "a", "start_char_pos": 844, "end_char_pos": 847}, {"type": "R", "before": "calm", "after": "stable", "start_char_pos": 850, "end_char_pos": 854}, {"type": "R", "before": "while during the Eurocrisis it went into the", "after": ", and that during the crisis it transitioned into an", "start_char_pos": 864, "end_char_pos": 908}, {"type": "R", "before": "panicked", "after": 
"unstable", "start_char_pos": 911, "end_char_pos": 919}, {"type": "R", "before": "of the our model fit to a good degree to what really happened in the", "after": "produced by our model match closely the actual time-line of events of the", "start_char_pos": 954, "end_char_pos": 1022}, {"type": "R", "before": "sometimes", "after": "in the unstable regime", "start_char_pos": 1104, "end_char_pos": 1113}, {"type": "R", "before": "In addition, we observe that asset diversification has no clear correlation with our BankRank. Thus diversification is neither reducing systemic, nor necessarily providing routes for contagion. These suggest that our model may provide a useful tool", "after": "Our model also proves useful", "start_char_pos": 1164, "end_char_pos": 1412}, {"type": "R", "before": "and for simulating", "after": ". This suggests that our model may be a useful tool for simulating the response dynamics of", "start_char_pos": 1477, "end_char_pos": 1495}, {"type": "D", "before": "in general", "after": null, "start_char_pos": 1522, "end_char_pos": 1532}], "sents_char_pos": [0, 165, 338, 563, 743, 929, 1030, 1163, 1258, 1357]} {"doc_id": "1410.0594", "revision_depth": "1", "before_revision": "In the present work we study the solution existence for a generalized Dynkin game of switching type which is shown to be the natural representation for general defaultable OTC contract in which acontingent CSA has been set between the parties . This is a theoretical counterparty risk mitigation mechanism that allows the counterparty of a general OTC contract to switch from zero to full/perfect collateralization and switch back whenever she wants until contract maturity paying some switching costs and taking into account the running costs that emerge over time. In this paper we allow for the strategic interaction between the counterparties of the underlying contract, which makes the problem solution much more tough. 
We are motivated in this research by the importance to show the economic sense - in terms of optimal contract design - of a contingent counterparty risk mitigation mechanism like our one. In particular, we show that the existence of the solution and the game Nash equilibrium is connected with the solution of a system of non-linear reflected BSDE which remains an problem. We have also proved its existence under strong condition ( in the so called \\emph{symmetric case ) highlighting in conclusion some interesting applications in finance and future researches .", "after_revision": "We study the solution 's existence for a generalized Dynkin game of switching type which is shown to be the natural representation for general defaultable OTC contract with contingent CSA . This is a theoretical counterparty risk mitigation mechanism that allows the counterparty of a general OTC contract to switch from zero to full/perfect collateralization and switch back whenever she wants until contract maturity paying some switching costs and taking into account the running costs that emerge over time. In this paper we allow for the strategic interaction between the counterparties of the underlying contract, which makes the problem solution much more tough. We are motivated in this research by the importance to show the economic sense - in terms of optimal contract design - of a contingent counterparty risk mitigation mechanism like our one. In particular, we show that the existence of the solution and the game Nash equilibrium is connected with the solution of a system of non-linear reflected BSDE which remains an open problem. 
We then provide the basic ideas to numerically search the game equilibrium via an iterative optimal stopping approach and we show the existence of the solution for our problem under strong condition , in the so called \\emph{ symmetric case .", "edit_actions": [{"type": "R", "before": "In the present work we", "after": "We", "start_char_pos": 0, "end_char_pos": 22}, {"type": "A", "before": null, "after": "'s", "start_char_pos": 42, "end_char_pos": 42}, {"type": "D", "before": "defaultable OTC contract", "after": null, "start_char_pos": 161, "end_char_pos": 185}, {"type": "D", "before": "in which a", "after": null, "start_char_pos": 186, "end_char_pos": 196}, {"type": "D", "before": "contingent CSA", "after": null, "start_char_pos": 196, "end_char_pos": 210}, {"type": "R", "before": "has been set between the parties", "after": "defaultable OTC contract with contingent CSA", "start_char_pos": 211, "end_char_pos": 243}, {"type": "R", "before": "counterparty risk mitigation mechanism", "after": "counterparty risk mitigation mechanism", "start_char_pos": 268, "end_char_pos": 306}, {"type": "R", "before": "Nash equilibrium", "after": "Nash equilibrium", "start_char_pos": 985, "end_char_pos": 1001}, {"type": "A", "before": null, "after": "open", "start_char_pos": 1091, "end_char_pos": 1091}, {"type": "R", "before": "have also proved its existence", "after": "then provide the basic ideas to numerically search the game equilibrium via an iterative optimal stopping approach and we show the existence of the solution for our problem", "start_char_pos": 1104, "end_char_pos": 1134}, {"type": "R", "before": "(", "after": ",", "start_char_pos": 1158, "end_char_pos": 1159}, {"type": "D", "before": "symmetric case", "after": null, "start_char_pos": 1183, "end_char_pos": 1197}, {"type": "R", "before": ") highlighting in conclusion some interesting applications in finance and future researches", "after": "symmetric case", "start_char_pos": 1198, "end_char_pos": 1289}], 
"sents_char_pos": [0, 245, 567, 725, 913, 1100]} {"doc_id": "1410.4771", "revision_depth": "1", "before_revision": "URLanism from various kingdoms of life face the challenge of regulating their size. Despite decades of research, we still do not have a good understanding of the molecular mechanisms involved in this regulation, and how cells coordinate the different events of the cell cycle , such as growth, division and DNA replication is still unclear. Here, we report on experimental results for the budding yeast Saccharomyces cerevisiae and the bacterium Escherichia coli, showing that, remarkably, they share a common strategy for cell size control. We collected data on single-cell growth and cell cycle progression in S . cerevisiae in several growth media and estimated the distributions of ] ] size at birth and interdivision time as well as their correlations throughout cell lineages. We also performed the same analysis on previously collected data on single-cell growth and division in E. coli. The results are in quantitative agreement with the predictions of the incremental model , which leads to the ] addition of a constant volume ( up to fluctuations), independent of size at birth, between birth and division; we show that in URLanisms size at birth and size at division exhibit a linear relationship with slope one. This result, together with extended additional analysis supporting ] the incremental model, argues against the existing \"critical size\" paradigm for cell size control in bacteriaand yeast .", "after_revision": "To maintain a constant cell size, dividing cells have to coordinate cell cycle events with cell growth . This coordination has for long been supposed to rely on the existence of size thresholds determining cell cycle progression 1]. In budding yeast, size is controlled at the G1/S transition 11]. In agreement with this hypothesis, the size at birth influences the time spent in G1: smaller cells have a longer G1 period 3]. 
Nevertheless, even though cells born smaller have a longer G1, the compensation is imperfect and they still bud at smaller cell sizes. In bacteria, several recent studies have shown that the incremental model of size control, in which size is controlled by addition of a constant volume ( in contrast to a size threshold), is able to quantitatively explain the experimental data on 4 different bacterial species 6, 5, 6, 7]. Here, we report on experimental results for the budding yeast Saccharomyces cerevisiae, finding, surprisingly, that cell size control in URLanism is very well described by the incremental model, suggesting a common strategy for cell size control with bacteria. Additionally, we argue that for S. cerevisiae the volume increment is not added from birth to division, but rather between two budding events .", "edit_actions": [{"type": "R", "before": "URLanism from various kingdoms of life face the challenge of regulating their size. Despite decades of research, we still do not have a good understanding of the molecular mechanisms involved in this regulation, and how cells coordinate the different events of the cell cycle , such as growth, division and DNA replication is still unclear. Here, we report on experimental results for the budding yeast Saccharomyces cerevisiae and the bacterium Escherichia coli, showing that, remarkably, they share a common strategy for cell size control. 
We collected data on single-cell growth and", "after": "To maintain a constant cell size, dividing cells have to coordinate", "start_char_pos": 0, "end_char_pos": 585}, {"type": "R", "before": "progression in S", "after": "events with cell growth", "start_char_pos": 597, "end_char_pos": 613}, {"type": "R", "before": "cerevisiae in several growth media and estimated the distributions of", "after": "This coordination has for long been supposed to rely on the existence of size thresholds determining cell cycle progression", "start_char_pos": 616, "end_char_pos": 685}, {"type": "A", "before": null, "after": "1", "start_char_pos": 686, "end_char_pos": 686}, {"type": "A", "before": null, "after": ". In budding yeast, size is controlled at the G1/S transition", "start_char_pos": 687, "end_char_pos": 687}, {"type": "A", "before": null, "after": "11", "start_char_pos": 688, "end_char_pos": 688}, {"type": "A", "before": null, "after": ". In agreement with this hypothesis, the", "start_char_pos": 689, "end_char_pos": 689}, {"type": "R", "before": "and interdivision time as well as their correlations throughout cell lineages. We also performed the same analysis on previously collected data on single-cell growth and division in E. coli. The results are in quantitative agreement with the predictions of the incremental model , which leads to the", "after": "influences the time spent in G1: smaller cells have a longer G1 period", "start_char_pos": 704, "end_char_pos": 1003}, {"type": "A", "before": null, "after": "3", "start_char_pos": 1004, "end_char_pos": 1004}, {"type": "A", "before": null, "after": ". Nevertheless, even though cells born smaller have a longer G1, the compensation is imperfect and they still bud at smaller cell sizes. 
In bacteria, several recent studies have shown that the incremental model of size control, in which size is controlled by", "start_char_pos": 1005, "end_char_pos": 1005}, {"type": "R", "before": "up to fluctuations), independent of size at birth, between birth and division; we show that in URLanisms size at birth and size at division exhibit a linear relationship with slope one. This result, together with extended additional analysis supporting", "after": "in contrast to a size threshold), is able to quantitatively explain the experimental data on 4 different bacterial species", "start_char_pos": 1038, "end_char_pos": 1290}, {"type": "A", "before": null, "after": "6, 5, 6, 7", "start_char_pos": 1291, "end_char_pos": 1291}, {"type": "A", "before": null, "after": ". Here, we report on experimental results for the budding yeast Saccharomyces cerevisiae, finding, surprisingly, that cell size control in URLanism is very well described by", "start_char_pos": 1292, "end_char_pos": 1292}, {"type": "R", "before": "argues against the existing \"critical size\" paradigm", "after": "suggesting a common strategy", "start_char_pos": 1316, "end_char_pos": 1368}, {"type": "R", "before": "in bacteriaand yeast", "after": "with bacteria. Additionally, we argue that for S. cerevisiae the volume increment is not added from birth to division, but rather between two budding events", "start_char_pos": 1391, "end_char_pos": 1411}], "sents_char_pos": [0, 83, 340, 541, 782, 894, 1116, 1223]} {"doc_id": "1411.7613", "revision_depth": "1", "before_revision": "The assessment of fundamental properties for economic and financial systems, such as systemic risk, is systematically hindered by privacy issues-that put severe limitations on the available information . Here we introduce a novel method to reconstruct partially-accessible networked systemsof this kind. 
The method is based on the knowledge of the fitnesses, i.e., intrinsic node-specific properties , and of the number of connections of only a limited subset of nodes. Such information is used to calibrate a directed configuration model which can generate ensembles of networks intended to represent the real system, so that the real network properties can be estimated within the generated ensemblein terms of mean values of the observables . Here we focus on estimating those properties that are commonly used to measure the network resilience to shock and crashes. Tests on both artificial and empirical networks shows that the method is remarkably robust with respect to the limitedness of the information available, thus representing a valuable tool for gaining insights on privacy-protected economic and financial systems.", "after_revision": "We address a fundamental problem that is systematically encountered when modeling complex systems: the limitedness of the information available. In the case of economic and financial networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures . Here we present an innovative method to reconstruct the structure of such partially-accessible systems, based on the knowledge of intrinsic node-specific properties and of the number of connections of only a limited subset of nodes. This information is used to calibrate an inference procedure based on fundamental concepts derived from statistical physics, which allows to generate ensembles of directed weighted networks intended to represent the real system, so that the real network properties can be estimated with their average values within the ensemble . Here we test the method both on synthetic and empirical networks, focusing on the properties that are commonly used to measure systemic risk. 
Indeed, the method shows a remarkable robustness with respect to the limitedness of the information available, thus representing a valuable tool for gaining insights on privacy-protected economic and financial systems.", "edit_actions": [{"type": "R", "before": "The assessment of fundamental properties for", "after": "We address a fundamental problem that is systematically encountered when modeling complex systems: the limitedness of the information available. In the case of", "start_char_pos": 0, "end_char_pos": 44}, {"type": "R", "before": "systems, such as systemic risk, is systematically hindered by privacy issues-that put severe limitations on the available information", "after": "networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures", "start_char_pos": 68, "end_char_pos": 201}, {"type": "R", "before": "introduce a novel", "after": "present an innovative", "start_char_pos": 212, "end_char_pos": 229}, {"type": "A", "before": null, "after": "the structure of such", "start_char_pos": 252, "end_char_pos": 252}, {"type": "R", "before": "networked systemsof this kind. 
The method is", "after": "systems,", "start_char_pos": 274, "end_char_pos": 318}, {"type": "D", "before": "the fitnesses, i.e.,", "after": null, "start_char_pos": 345, "end_char_pos": 365}, {"type": "D", "before": ",", "after": null, "start_char_pos": 401, "end_char_pos": 402}, {"type": "R", "before": "Such", "after": "This", "start_char_pos": 471, "end_char_pos": 475}, {"type": "R", "before": "a directed configuration model which can", "after": "an inference procedure based on fundamental concepts derived from statistical physics, which allows to", "start_char_pos": 509, "end_char_pos": 549}, {"type": "A", "before": null, "after": "directed weighted", "start_char_pos": 572, "end_char_pos": 572}, {"type": "R", "before": "within the generated ensemblein terms of mean values of the observables", "after": "with their average values within the ensemble", "start_char_pos": 674, "end_char_pos": 745}, {"type": "R", "before": "focus on estimating those", "after": "test the method both on synthetic and empirical networks, focusing on the", "start_char_pos": 756, "end_char_pos": 781}, {"type": "R", "before": "the network resilience to shock and crashes. Tests on both artificial and empirical networks shows that the method is remarkably robust", "after": "systemic risk. Indeed, the method shows a remarkable robustness", "start_char_pos": 827, "end_char_pos": 962}], "sents_char_pos": [0, 203, 304, 470, 747, 871]} {"doc_id": "1412.3353", "revision_depth": "2", "before_revision": "The heterogeneity of reaction fluxes present in metabolic networks can be exploited to construct high-flux fluctuation backbones as reduced versions of metabolism. These backbones maintain all relevant information while displaying a substantially decreased number of interconnections and, hence, they become a useful tool to extract main metabolic highways and so to unveil important biological information. 
Here, we disclose the metabolic backbone of Escherichia coli using computationally predicted fluxes that maximize the growth rate in glucose medium, and we contrast it with the backbone of Mycoplasma pneumoniae, a much URLanism. We find that the core of both backbones are mainly composed of reactions in ancient pathways, meaning that those reactions still remain at present significant for biomass production. At the same time, a comparative analysis of E. coli backbones in different media leads to the identification of pathways sensitive to environmental changes. Backbones, as networks of metabolites connected by the most relevant fluxes, are thus useful to trace simultaneously both evolution and adaptation fingerprints in cell metabolism .", "after_revision": "The heterogeneity of reaction fluxes present in a metabolic network can be exploited to construct the high-flux fluctuation backbone as a reduced version of metabolism. The backbone maintains all relevant information while displaying a substantially decreased number of interconnections and, hence, it becomes a useful tool to extract relevant metabolic routes which unveil important biological information. Here, we disclose the metabolic backbone of Escherichia coli using the computationally predicted fluxes which maximize the growth rate in glucose minimal medium, and we contrast it with the backbone of Mycoplasma pneumoniae, a much URLanism. We find that the core of both backbones are mainly composed of reactions in ancient pathways, meaning that those reactions still retain at present the central role in the evolved metabolism. In E. coli, the analysis of the core reveals a dominant direction with the synthesis of purines and pyrimidines and the metabolism of lipids ensuing after energy metabolism. At the same time, a comparative analysis of the backbone of E. coli in different media leads to the identification of pathways sensitive to environmental changes. 
The metabolic backbone of URLanism, as a network of metabolites connected by the most relevant fluxes, is thus useful to trace simultaneously both its evolution and adaptation fingerprints .", "edit_actions": [{"type": "R", "before": "metabolic networks", "after": "a metabolic network", "start_char_pos": 48, "end_char_pos": 66}, {"type": "A", "before": null, "after": "the", "start_char_pos": 97, "end_char_pos": 97}, {"type": "R", "before": "backbones as reduced versions", "after": "backbone as a reduced version", "start_char_pos": 120, "end_char_pos": 149}, {"type": "R", "before": "These backbones maintain", "after": "The backbone maintains", "start_char_pos": 165, "end_char_pos": 189}, {"type": "R", "before": "they become", "after": "it becomes", "start_char_pos": 297, "end_char_pos": 308}, {"type": "R", "before": "main metabolic highways and so to", "after": "relevant metabolic routes which", "start_char_pos": 334, "end_char_pos": 367}, {"type": "A", "before": null, "after": "the", "start_char_pos": 476, "end_char_pos": 476}, {"type": "R", "before": "that", "after": "which", "start_char_pos": 510, "end_char_pos": 514}, {"type": "A", "before": null, "after": "minimal", "start_char_pos": 551, "end_char_pos": 551}, {"type": "R", "before": "remain at present significant for biomass production.", "after": "retain at present the central role in the evolved metabolism. In E. 
coli, the analysis of the core reveals a dominant direction with the synthesis of purines and pyrimidines and the metabolism of lipids ensuing after energy metabolism.", "start_char_pos": 769, "end_char_pos": 822}, {"type": "A", "before": null, "after": "the backbone of", "start_char_pos": 867, "end_char_pos": 867}, {"type": "D", "before": "backbones", "after": null, "start_char_pos": 876, "end_char_pos": 885}, {"type": "R", "before": "Backbones, as networks", "after": "The metabolic backbone of URLanism, as a network", "start_char_pos": 981, "end_char_pos": 1003}, {"type": "R", "before": "are", "after": "is", "start_char_pos": 1058, "end_char_pos": 1061}, {"type": "A", "before": null, "after": "its", "start_char_pos": 1103, "end_char_pos": 1103}, {"type": "D", "before": "in cell metabolism", "after": null, "start_char_pos": 1142, "end_char_pos": 1160}], "sents_char_pos": [0, 164, 408, 639, 822, 980]} {"doc_id": "1412.7059", "revision_depth": "1", "before_revision": "State-of-the-art emergency navigation approaches , which aim to evacuate civilians during a disaster , commonly make decisions in a real-time fashion with respect to one pre-defined algorithm and living sensory data. Hence, fatalities caused by the inappropriate guidance of an approach can only be unveiled until the end of an evacuation process and is impossible to be remedied. Previous research implies that the performance of routing algorithms for evacuation proposes are sensitive to initial distribution of evacuees, occupancy rate, disaster type as well as disaster location. In other words, a well performed algorithm in one scenario may achieve bad results in another scenario. This problem is especially serious in heuristic-based routing algorithms where results are affected by the configuration of certain parameters. Therefore, this paper proposes a simulation-based routing algorithm to realise near-optimal evacuations by making use of the high computational power of cloud servers. 
Rather than guiding evacuees with a routing algorithm directly , a robust Cognitive Packet Network based algorithm is first evaluated via a cloud-based simulator in a faster-than-real-time manner . Towards all the perished evacuees in the simulation, a variant of Dijkstra's algorithm is then employed to solve optimal paths for them and all the evacuees will finally follow the desired paths to exits. Furthermore, the \"tsunami of data\" phenomenon caused by simultaneous information exchanges among massive mobile devices and cloud servers is avoided as the proposed algorithm can calculate desired paths only based on the initial situation .", "after_revision": "State-of-the-art emergency navigation approaches are designed to evacuate civilians during a disaster based on real-time decisions using a pre-defined algorithm and live sensory data. Hence, casualties caused by the poor decisions and guidance are only apparent at the end of the evacuation process and cannot then be remedied. Previous research shows that the performance of routing algorithms for evacuation purposes are sensitive to the initial distribution of evacuees, the occupancy levels, the type of disaster and its as well its locations. Thus an algorithm that performs well in one scenario may achieve bad results in another scenario. This problem is especially serious in heuristic-based routing algorithms for evacuees where results are affected by the choice of certain parameters. Therefore, this paper proposes a simulation-based evacuee routing algorithm that optimises evacuation by making use of the high computational power of cloud servers. Rather than guiding evacuees with a predetermined routing algorithm , a robust Cognitive Packet Network based algorithm is first evaluated via a cloud-based simulator in a faster-than-real-time manner , and any \"simulated casualties\" are then re-routed using a variant of Dijkstra's algorithm to obtain new safe paths for them to exits. 
This approach can be iterated as long as corrective action is still possible .", "edit_actions": [{"type": "R", "before": ", which aim", "after": "are designed", "start_char_pos": 49, "end_char_pos": 60}, {"type": "R", "before": ", commonly make decisions in a real-time fashion with respect to one", "after": "based on real-time decisions using a", "start_char_pos": 101, "end_char_pos": 169}, {"type": "R", "before": "living", "after": "live", "start_char_pos": 196, "end_char_pos": 202}, {"type": "R", "before": "fatalities", "after": "casualties", "start_char_pos": 224, "end_char_pos": 234}, {"type": "R", "before": "inappropriate guidance of an approach can only be unveiled until", "after": "poor decisions and guidance are only apparent at", "start_char_pos": 249, "end_char_pos": 313}, {"type": "R", "before": "an", "after": "the", "start_char_pos": 325, "end_char_pos": 327}, {"type": "R", "before": "is impossible to", "after": "cannot then", "start_char_pos": 351, "end_char_pos": 367}, {"type": "R", "before": "implies", "after": "shows", "start_char_pos": 399, "end_char_pos": 406}, {"type": "R", "before": "proposes", "after": "purposes", "start_char_pos": 465, "end_char_pos": 473}, {"type": "A", "before": null, "after": "the", "start_char_pos": 491, "end_char_pos": 491}, {"type": "R", "before": "occupancy rate, disaster type as well as disaster location. In other words, a well performed algorithm", "after": "the occupancy levels, the type of disaster and its as well its locations. 
Thus an algorithm that performs well", "start_char_pos": 526, "end_char_pos": 628}, {"type": "A", "before": null, "after": "for evacuees", "start_char_pos": 763, "end_char_pos": 763}, {"type": "R", "before": "configuration", "after": "choice", "start_char_pos": 798, "end_char_pos": 811}, {"type": "R", "before": "routing algorithm to realise near-optimal evacuations", "after": "evacuee routing algorithm that optimises evacuation", "start_char_pos": 885, "end_char_pos": 938}, {"type": "R", "before": "routing algorithm directly", "after": "predetermined routing algorithm", "start_char_pos": 1039, "end_char_pos": 1065}, {"type": "R", "before": ". Towards all the perished evacuees in the simulation,", "after": ", and any \"simulated casualties\" are then re-routed using", "start_char_pos": 1199, "end_char_pos": 1253}, {"type": "R", "before": "is then employed to solve optimal", "after": "to obtain new safe", "start_char_pos": 1288, "end_char_pos": 1321}, {"type": "D", "before": "and all the evacuees will finally follow the desired paths", "after": null, "start_char_pos": 1337, "end_char_pos": 1395}, {"type": "R", "before": "Furthermore, the \"tsunami of data\" phenomenon caused by simultaneous information exchanges among massive mobile devices and cloud servers is avoided as the proposed algorithm can calculate desired paths only based on the initial situation", "after": "This approach can be iterated as long as corrective action is still possible", "start_char_pos": 1406, "end_char_pos": 1644}], "sents_char_pos": [0, 216, 380, 585, 689, 834, 1002, 1200, 1405]} {"doc_id": "1412.7695", "revision_depth": "1", "before_revision": "Allosteric communication in proteins is a central and yet not solved problem of structural biochemistry. Molecular communication in proteins requires the existence of allosteric pathways. Previous findings, from computational biology (Ota and Agard, 2005), has proposed that heat diffuses in a protein through allosteric pathways. 
In this work we studied heat diffusion in the well know PDZ-2 protein . This protein has two cognate allosteric pathways and we confirm that heat flows preferentially through them . Also, a new property is observed for protein structures : heat diffuses asymmetrically through them . The underling structure of this asymmetrical heat flow is the hydrogen bond of normal length (~2.85 {\\AA}) , that can act as a thermal diode. Also asymmetrical heat diffusion is due, in } a higher scale, to local organization of residues . This asymmetrical energy flow may be relevant for allosteric signal communication directionality in protein structures .", "after_revision": "Allosteric communication in proteins is a central and yet unsolved problem of structural biochemistry. Previous findings, from computational biology (Ota and Agard, 2005), have proposed that heat diffuses in a protein through cognate protein allosteric pathways. This work studied heat diffusion in the well-known PDZ-2 protein , and confirmed that this protein has two cognate allosteric pathways and that heat flows preferentially through these . Also, a new property was also observed for protein structures - heat diffuses asymmetrically through the structures . The underling structure of this asymmetrical heat flow was a normal length hydrogen bond (~2.85 {\\AA}) that acted as a thermal rectifier. In contrast, thermal rectification was compromised in short hydrogen bonds (~2.60 \\AA}), giving rise to symmetrical thermal diffusion. Asymmetrical heat diffusion was due, on a higher scale, to the local, organization of residues that, in turn, was also mediated by hydrogen bonds . 
This asymmetrical /symmetrical energy flow may be relevant for allosteric signal communication directionality in proteins and for the control of heat flow in materials science .", "edit_actions": [{"type": "R", "before": "not solved", "after": "unsolved", "start_char_pos": 58, "end_char_pos": 68}, {"type": "D", "before": "Molecular communication in proteins requires the existence of allosteric pathways.", "after": null, "start_char_pos": 105, "end_char_pos": 187}, {"type": "R", "before": "has", "after": "have", "start_char_pos": 257, "end_char_pos": 260}, {"type": "A", "before": null, "after": "cognate protein", "start_char_pos": 310, "end_char_pos": 310}, {"type": "R", "before": "In this work we", "after": "This work", "start_char_pos": 332, "end_char_pos": 347}, {"type": "R", "before": "well know", "after": "well-known", "start_char_pos": 378, "end_char_pos": 387}, {"type": "R", "before": ". This", "after": ", and confirmed that this", "start_char_pos": 402, "end_char_pos": 408}, {"type": "D", "before": "we confirm", "after": null, "start_char_pos": 457, "end_char_pos": 467}, {"type": "R", "before": "them", "after": "these", "start_char_pos": 507, "end_char_pos": 511}, {"type": "R", "before": "is", "after": "was also", "start_char_pos": 535, "end_char_pos": 537}, {"type": "R", "before": ":", "after": "-", "start_char_pos": 570, "end_char_pos": 571}, {"type": "R", "before": "them", "after": "the structures", "start_char_pos": 609, "end_char_pos": 613}, {"type": "R", "before": "is the hydrogen bond of normal length", "after": "was a normal length hydrogen bond", "start_char_pos": 671, "end_char_pos": 708}, {"type": "R", "before": ", that can act", "after": "that acted", "start_char_pos": 723, "end_char_pos": 737}, {"type": "R", "before": "diode. Also asymmetrical heat diffusion is due, in", "after": "rectifier. 
In contrast, thermal rectification was compromised in short hydrogen bonds (~2.60", "start_char_pos": 751, "end_char_pos": 801}, {"type": "A", "before": null, "after": "\\AA", "start_char_pos": 802, "end_char_pos": 802}, {"type": "A", "before": null, "after": "), giving rise to symmetrical thermal diffusion. Asymmetrical heat diffusion was due, on", "start_char_pos": 803, "end_char_pos": 803}, {"type": "R", "before": "local", "after": "the local,", "start_char_pos": 823, "end_char_pos": 828}, {"type": "A", "before": null, "after": "that, in turn, was also mediated by hydrogen bonds", "start_char_pos": 854, "end_char_pos": 854}, {"type": "A", "before": null, "after": "/symmetrical", "start_char_pos": 875, "end_char_pos": 875}, {"type": "R", "before": "protein structures", "after": "proteins and for the control of heat flow in materials science", "start_char_pos": 958, "end_char_pos": 976}], "sents_char_pos": [0, 104, 187, 331, 403, 513, 615, 757, 856]} {"doc_id": "1502.05603", "revision_depth": "1", "before_revision": "This paper examines the stock market comovements using basically three different approaches. Firstly, we used the most common linear analysis, based on cointegrationand Granger causality tests ; secondly we applied a nonlinear approach, using mutual information to analyze nonlinear dependence. Since underlying data sets are affected by non-stationarities , we also applied MF-DFA and MF-DXA in order to examine the multifractality nature of data and to analyze the relationship and mutual interaction between pairs of series, respectively. The overall results are quite interesting, since we found only 170 pair of stock markets cointegrated, and according to the Granger causality and mutual information we realized that the strongest relations lies between emerging markets, and between emerging and frontier markets. According to scaling exponent given by MF-DFA , h(q=2)>1, we found that all underlying data belong to non-stationary process. 
There is no cross-over in the fluctuation functions determined by MF-DFA method confirmed that mentioned approach could remove trends embedded in the data sets\\pm0\\pm . The nature of cross-correlation exponent based on Mf-DXA is almost multifractal for all stock market pairs . The empirical relation, h_{xy}\\le [ h_{xx + h_{yy ]/2 was confirmedjust for q>0 , while for q<0 there was a deviation from this relation . Width of singularity spectrum is in the range \\Delta \\alpha_{xx}\\in [0.304,0.905] which is another confirmation about multifractality nature of underlying data sets. The singularity spectrum for cross-correlation is in the range \\Delta \\alpha_{xy}\\in [0.246,1.178] confirming more complex relation between stock markets . The value of \\sigma_{DCCA} which is a measure for quantifying degree of cross-correlation indicates that all stock market pairs in the underlying time interval belong to cross-correlated series .", "after_revision": "Stock market comovements are examined using cointegration, Granger causality tests and nonlinear approaches in context of mutual information and correlations. Underlying data sets are affected by non-stationarities and trends , we also apply AMF-DFA and AMF-DXA. We find only 170 pair of Stock markets cointegrated, and according to the Granger causality and mutual information , we realize that the strongest relations lies between emerging markets, and between emerging and frontier markets. According to scaling exponent given by AMF-DFA , h(q=2)>1, we find that all underlying data sets belong to non-stationary process. According to EMH, only 8 markets are classified in uncorrelated processes at 2\\sigma confidence interval. 6 Stock markets belong to anti-correlated class and dominant part of markets has memory in corresponding daily index prices during January 1995 to February 2014. New-Zealand with H=0.457\\pm0.004 and Jordan with H=0.602\\pm 0.006 are far from EMH . 
The nature of cross-correlation exponents based on AMF-DXA is almost multifractal for all pair of Stock markets . The empirical relation, H_{xy}\\le [ H_{xx + H_{yy ]/2 , is confirmed. Mentioned relation for q>0 is also satisfied while for q<0 there is a deviation from this relation confirming behavior of markets for small fluctuations is affected by contribution of major pair. For larger fluctuations, the cross-correlation contains information from both local and global conditions. Width of singularity spectrum for auto-correlation and cross-correlation are \\Delta \\alpha_{xx}\\in [0.304,0.905] and \\Delta \\alpha_{xy}\\in [0.246,1.178] , respectively. The wide range of singularity spectrum for cross-correlation confirms that the bilateral relation between Stock markets is more complex . The value of \\sigma_{DCCA} indicates that all pairs of stock market studied in this time interval belong to cross-correlated processes .", "edit_actions": [{"type": "R", "before": "This paper examines the stock market comovements using basically three different approaches. Firstly, we used the most common linear analysis, based on cointegrationand", "after": "Stock market comovements are examined using cointegration,", "start_char_pos": 0, "end_char_pos": 168}, {"type": "R", "before": "; secondly we applied a nonlinear approach, using mutual information to analyze nonlinear dependence. Since underlying", "after": "and nonlinear approaches in context of mutual information and correlations. Underlying", "start_char_pos": 193, "end_char_pos": 311}, {"type": "A", "before": null, "after": "and trends", "start_char_pos": 357, "end_char_pos": 357}, {"type": "R", "before": "applied MF-DFA and MF-DXA in order to examine the multifractality nature of data and to analyze the relationship and mutual interaction between pairs of series, respectively. The overall results are quite interesting, since we found", "after": "apply AMF-DFA and AMF-DXA. 
We find", "start_char_pos": 368, "end_char_pos": 600}, {"type": "R", "before": "stock", "after": "Stock", "start_char_pos": 618, "end_char_pos": 623}, {"type": "R", "before": "we realized", "after": ", we realize", "start_char_pos": 708, "end_char_pos": 719}, {"type": "R", "before": "MF-DFA", "after": "AMF-DFA", "start_char_pos": 862, "end_char_pos": 868}, {"type": "R", "before": "found", "after": "find", "start_char_pos": 884, "end_char_pos": 889}, {"type": "A", "before": null, "after": "sets", "start_char_pos": 915, "end_char_pos": 915}, {"type": "R", "before": "There is no cross-over in the fluctuation functions determined by MF-DFA method confirmed that mentioned approach could remove trends embedded in the data sets", "after": "According to EMH, only 8 markets are classified in uncorrelated processes at 2\\sigma confidence interval. 6 Stock markets belong to anti-correlated class and dominant part of markets has memory in corresponding daily index prices during January 1995 to February 2014. New-Zealand with H=0.457", "start_char_pos": 950, "end_char_pos": 1109}, {"type": "A", "before": null, "after": ".004 and Jordan with H=0.602", "start_char_pos": 1113, "end_char_pos": 1113}, {"type": "A", "before": null, "after": "0.006 are far from EMH", "start_char_pos": 1117, "end_char_pos": 1117}, {"type": "R", "before": "exponent based on Mf-DXA", "after": "exponents based on AMF-DXA", "start_char_pos": 1152, "end_char_pos": 1176}, {"type": "R", "before": "stock market pairs", "after": "pair of Stock markets", "start_char_pos": 1208, "end_char_pos": 1226}, {"type": "R", "before": "h_{xy", "after": "H_{xy", "start_char_pos": 1253, "end_char_pos": 1258}, {"type": "R", "before": "h_{xx", "after": "H_{xx", "start_char_pos": 1265, "end_char_pos": 1270}, {"type": "R", "before": "h_{yy", "after": "H_{yy", "start_char_pos": 1273, "end_char_pos": 1278}, {"type": "R", "before": "was confirmedjust", "after": ", is confirmed. 
Mentioned relation", "start_char_pos": 1283, "end_char_pos": 1300}, {"type": "R", "before": ",", "after": "is also satisfied", "start_char_pos": 1309, "end_char_pos": 1310}, {"type": "R", "before": "was", "after": "is", "start_char_pos": 1331, "end_char_pos": 1334}, {"type": "R", "before": ".", "after": "confirming behavior of markets for small fluctuations is affected by contribution of major pair. For larger fluctuations, the cross-correlation contains information from both local and global conditions.", "start_char_pos": 1366, "end_char_pos": 1367}, {"type": "R", "before": "is in the range", "after": "for auto-correlation and cross-correlation are", "start_char_pos": 1398, "end_char_pos": 1413}, {"type": "R", "before": "which is another confirmation about multifractality nature of underlying data sets. The singularity spectrum for cross-correlation is in the range", "after": "and", "start_char_pos": 1450, "end_char_pos": 1596}, {"type": "R", "before": "confirming more complex relation between stock markets", "after": ", respectively. The wide range of singularity spectrum for cross-correlation confirms that the bilateral relation between Stock markets is more complex", "start_char_pos": 1633, "end_char_pos": 1687}, {"type": "D", "before": "which is a measure for quantifying degree of cross-correlation", "after": null, "start_char_pos": 1717, "end_char_pos": 1779}, {"type": "R", "before": "stock market pairs in the underlying", "after": "pairs of stock market studied in this", "start_char_pos": 1799, "end_char_pos": 1835}, {"type": "R", "before": "series", "after": "processes", "start_char_pos": 1877, "end_char_pos": 1883}], "sents_char_pos": [0, 92, 194, 294, 542, 822, 949, 1119, 1228, 1367, 1533, 1689]} {"doc_id": "1502.07110", "revision_depth": "1", "before_revision": "For a class of polynomial kinetic systems, this work examines connections between system parameters, and uniqueness and stability of the resulting equilibria. 
Such systems are typically employed to describe nonlinear dynamics in chemical reaction networks, but over the last years they proved useful in modeling a wide range of nonlinear dynamic systems with applications in biology, process systems, economics or transportation problems. In particular , a canonical representation of the set of all possible feasible equilibrium solutions is developed. The characterization is made in terms of compartmental matrices which by construction are strictly stable and define the so-called family of solutions. Feasibility is imposed by a set of constraints, which are linear in the log-transformed space of complexes, and relate to the kernel of the stoichiometric subspace. One particularly interesting representation of these constraints can be established in terms of a class of monotonous functions which turn out to be critical to conclude uniqueness of equilibrium points in a class of deficiency one networks. One main consequence of such representation is the possibility of a simple constructive proof of the deficiency one theorem. It also allows a precise characterization of the parameter space region of complex balance solutions we refer to as the Horn set. Future directions may involve detection or design of networks having multiple equilibria, or the use of complex balance condition to provide stabilization via feed-back control of open reaction systems .", "after_revision": "This paper studies the relations among system parameters, uniqueness, and stability of equilibria, for kinetic systems given in the form of polynomial ODEs. Such models are commonly used to describe the dynamics of nonnegative systems, with a wide range of application fields such as chemistry, systems biology, process modeling or even transportation systems. Using a flux-based description of kinetic models , a canonical representation of the set of all possible feasible equilibria is developed. 
The characterization is made in terms of strictly stable compartmental matrices to define the so-called family of solutions. Feasibility is imposed by a set of constraints, which are linear on a log-transformed space of complexes, and relate to the kernel of a matrix, the columns of which span the stoichiometric subspace. One particularly interesting representation of these constraints can be expressed in terms of a class of monotonous decreasing functions. This allows connections to be established with classical results in CRNT that relate to the existence and uniqueness of equilibria along positive stoichiometric compatibility classes. In particular, monotonicity can be employed to identify regions in the set of possible reaction rate coefficients leading to complex balancing, and to conclude uniqueness of equilibria for a class of positive deficiency networks. The latter result might support constructing an alternative proof of the well-known deficiency one theorem. The developed notions and results are illustrated through examples .", "edit_actions": [{"type": "R", "before": "For a class of polynomial kinetic systems, this work examines connections between", "after": "This paper studies the relations among", "start_char_pos": 0, "end_char_pos": 81}, {"type": "R", "before": "and uniqueness", "after": "uniqueness,", "start_char_pos": 101, "end_char_pos": 115}, {"type": "R", "before": "the resulting equilibria. Such systems are typically employed to describe nonlinear dynamics in chemical reaction networks, but over the last years they proved useful in modeling", "after": "equilibria, for kinetic systems given in the form of polynomial ODEs. 
Such models are commonly used to describe the dynamics of nonnegative systems, with", "start_char_pos": 133, "end_char_pos": 311}, {"type": "R", "before": "nonlinear dynamic systems with applications in", "after": "application fields such as chemistry, systems", "start_char_pos": 328, "end_char_pos": 374}, {"type": "R", "before": "systems, economics or transportation problems. In particular", "after": "modeling or even transportation systems. Using a flux-based description of kinetic models", "start_char_pos": 392, "end_char_pos": 452}, {"type": "R", "before": "equilibrium solutions", "after": "equilibria", "start_char_pos": 518, "end_char_pos": 539}, {"type": "R", "before": "compartmental matrices which by construction are strictly stable and", "after": "strictly stable compartmental matrices to", "start_char_pos": 595, "end_char_pos": 663}, {"type": "R", "before": "in the", "after": "on a", "start_char_pos": 771, "end_char_pos": 777}, {"type": "R", "before": "the", "after": "a matrix, the columns of which span the", "start_char_pos": 842, "end_char_pos": 845}, {"type": "R", "before": "established", "after": "expressed", "start_char_pos": 943, "end_char_pos": 954}, {"type": "R", "before": "functions which turn out to be critical to conclude uniqueness of equilibrium points in", "after": "decreasing functions. This allows connections to be established with classical results in CRNT that relate to the existence and uniqueness of equilibria along positive stoichiometric compatibility classes. In particular, monotonicity can be employed to identify regions in the set of possible reaction rate coefficients leading to complex balancing, and to conclude uniqueness of equilibria for", "start_char_pos": 989, "end_char_pos": 1076}, {"type": "R", "before": "deficiency one networks. One main consequence of such representation is the possibility of a simple constructive", "after": "positive deficiency networks. 
The latter result might support constructing an alternative", "start_char_pos": 1088, "end_char_pos": 1200}, {"type": "A", "before": null, "after": "well-known", "start_char_pos": 1214, "end_char_pos": 1214}, {"type": "R", "before": "It also allows a precise characterization of the parameter space region of complex balance solutions we refer to as the Horn set. Future directions may involve detection or design of networks having multiple equilibria, or the use of complex balance condition to provide stabilization via feed-back control of open reaction systems", "after": "The developed notions and results are illustrated through examples", "start_char_pos": 1239, "end_char_pos": 1570}], "sents_char_pos": [0, 158, 438, 553, 705, 870, 1112, 1238, 1368]} {"doc_id": "1503.06529", "revision_depth": "1", "before_revision": "Molecular motors and cytoskeletal filaments mostly work collectively under opposing forces. This opposing force may be due to cargo carried by motors , or resistance coming from cell membrane pressing against the cytoskeletal filaments. Certain recent studies have shown that the collective maximum force (stall force) generated by multiple cytoskeletal filaments or molecular motors may not always be just a simple sum of stall force for individual filaments or motors. To understand this phenomena of excess or deficit collective force generation , we study a broad class of models of both cytoskeletal filaments and molecular motors. We argue that the stall force generated by a group of filaments or motors is additive, i.e. , the stall force of N filaments (motors) is N times the stall force of one filament (motor), when the system is in equilibrium at stall. Consequently , we show that this additivity typically does not hold when the system departs from equilibrium at stall. 
We thus present a novel and unified understanding of existing models exhibiting such non- addivity, and generalize our arguments by developing new models that demonstrate this phenomena. We also propose a quantity similar to thermodynamic efficiency to provide a simple understanding of deviation from stall-force additivity for filament and motor collectives.", "after_revision": "Molecular motors and cytoskeletal filaments work collectively most of the time under opposing forces. This opposing force may be due to cargo carried by motors or resistance coming from the cell membrane pressing against the cytoskeletal filaments. Some recent studies have shown that the collective maximum force (stall force) generated by multiple cytoskeletal filaments or molecular motors may not always be just a simple sum of the stall forces of the individual filaments or motors. To understand this excess or deficit in the collective force , we study a broad class of models of both cytoskeletal filaments and molecular motors. We argue that the stall force generated by a group of filaments or motors is additive, that is , the stall force of N number of filaments (motors) is N times the stall force of one filament (motor), when the system is in equilibrium at stall. Conversely , we show that this additive property typically does not hold true when the system is not at equilibrium at stall. We thus present a novel and unified understanding of the existing models exhibiting such non-addivity, and generalise our arguments by developing new models that demonstrate this phenomena. 
We also propose a quantity similar to thermodynamic efficiency to easily predict this deviation from stall-force additivity for filament and motor collectives.", "edit_actions": [{"type": "R", "before": "mostly work collectively", "after": "work collectively most of the time", "start_char_pos": 44, "end_char_pos": 68}, {"type": "D", "before": ",", "after": null, "start_char_pos": 150, "end_char_pos": 151}, {"type": "A", "before": null, "after": "the", "start_char_pos": 178, "end_char_pos": 178}, {"type": "R", "before": "Certain", "after": "Some", "start_char_pos": 238, "end_char_pos": 245}, {"type": "R", "before": "stall force for", "after": "the stall forces of the", "start_char_pos": 424, "end_char_pos": 439}, {"type": "D", "before": "phenomena of", "after": null, "start_char_pos": 491, "end_char_pos": 503}, {"type": "R", "before": "collective force generation", "after": "in the collective force", "start_char_pos": 522, "end_char_pos": 549}, {"type": "R", "before": "i.e.", "after": "that is", "start_char_pos": 725, "end_char_pos": 729}, {"type": "A", "before": null, "after": "number of", "start_char_pos": 753, "end_char_pos": 753}, {"type": "R", "before": "Consequently", "after": "Conversely", "start_char_pos": 869, "end_char_pos": 881}, {"type": "R", "before": "additivity", "after": "additive property", "start_char_pos": 902, "end_char_pos": 912}, {"type": "A", "before": null, "after": "true", "start_char_pos": 937, "end_char_pos": 937}, {"type": "R", "before": "departs from", "after": "is not at", "start_char_pos": 954, "end_char_pos": 966}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1042, "end_char_pos": 1042}, {"type": "R", "before": "non- addivity, and generalize", "after": "non-addivity, and generalise", "start_char_pos": 1075, "end_char_pos": 1104}, {"type": "R", "before": "provide a simple understanding of", "after": "easily predict this", "start_char_pos": 1243, "end_char_pos": 1276}], "sents_char_pos": [0, 91, 237, 471, 637, 868, 
988, 1176]} {"doc_id": "1505.07062", "revision_depth": "1", "before_revision": "Coverage planning and optimization is one of the most crucial tasks for a radio network operator. Efficient coverage optimization requires accurate coverage estimation which relies on geo-located field measurements . These measurements are gathered today during highly expensive drive tests and will be reported in the near future by users equipments thanks to the 3GPP MDT feature ( still costly in terms of battery consumption and signaling overhead ). In both cases , predicting the coverage on a location where no measurements are available remains a key and challenging task. This paper describes a powerful tool that gives an accurate coverage prediction on the whole area of interest , i.e. a coverage map , by spatially interpolating geo-located measurements using Kriging technique. The paper focuses on the reduction of the computational complexity of the kriging algorithm by applying Fixed Rank Kriging (FRK). The performance evaluation of the FRK algorithm both on simulated measurements and real field measurements shows a good trade-off between prediction efficiency and computational complexity. In order to go a step further towards operational application of the proposed algorithm, a scenario with multiple cells is studied. Simulation results show a good performance in terms of coverage prediction and detection of best serving cell.", "after_revision": "Coverage planning and optimization is one of the most crucial tasks for a radio network operator. Efficient coverage optimization requires accurate coverage estimation . This estimation relies on geo-located field measurements which are gathered today during highly expensive drive tests (DT); and will be reported in the near future by users ' mobile devices thanks to the 3GPP Minimizing Drive Tests (MDT) feature~\\mbox{%DIFAUXCMD 3GPPproposal still costly in terms of battery consumption and signaling overhead . 
Therefore , predicting the coverage on a location where no measurements are available remains a key and challenging task. This paper describes a powerful tool that gives an accurate coverage prediction on the whole area of interest : it builds a coverage map by spatially interpolating geo-located measurements using the Kriging technique. The paper focuses on the reduction of the computational complexity of the Kriging algorithm by applying Fixed Rank Kriging (FRK). The performance evaluation of the FRK algorithm both on simulated measurements and real field measurements shows a good trade-off between prediction efficiency and computational complexity. In order to go a step further towards the operational application of the proposed algorithm, a multicellular use-case is studied. Simulation results show a good performance in terms of coverage prediction and detection of the best serving cell.", "edit_actions": [{"type": "R", "before": "which", "after": ". This estimation", "start_char_pos": 168, "end_char_pos": 173}, {"type": "R", "before": ". These measurements", "after": "which", "start_char_pos": 215, "end_char_pos": 235}, {"type": "A", "before": null, "after": "(DT);", "start_char_pos": 291, "end_char_pos": 291}, {"type": "R", "before": "equipments", "after": "' mobile devices", "start_char_pos": 341, "end_char_pos": 351}, {"type": "R", "before": "MDT feature (", "after": "Minimizing Drive Tests (MDT) feature~\\mbox{%DIFAUXCMD 3GPPproposal", "start_char_pos": 371, "end_char_pos": 384}, {"type": "R", "before": "). In both cases", "after": ". 
Therefore", "start_char_pos": 453, "end_char_pos": 469}, {"type": "R", "before": ", i.e.", "after": ": it builds", "start_char_pos": 692, "end_char_pos": 698}, {"type": "D", "before": ",", "after": null, "start_char_pos": 714, "end_char_pos": 715}, {"type": "A", "before": null, "after": "the", "start_char_pos": 774, "end_char_pos": 774}, {"type": "R", "before": "kriging", "after": "Kriging", "start_char_pos": 868, "end_char_pos": 875}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1152, "end_char_pos": 1152}, {"type": "R", "before": "scenario with multiple cells", "after": "multicellular use-case", "start_char_pos": 1206, "end_char_pos": 1234}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1339, "end_char_pos": 1339}], "sents_char_pos": [0, 97, 216, 455, 581, 793, 923, 1113, 1246]} {"doc_id": "1508.03533", "revision_depth": "1", "before_revision": " In the present paper we employ the theoretical tools developed in network theory , in order to shed light on the response of world wide trade to the financial crisis of 2007. In particular, we have explored the evolution of the bipartite country-product World Trade Web across the years 1995-2010, monitoring the behaviour of the system both before and after 2007. Remarkably, our results indicate that, from 2003 on, the abundances of a recently-defined class of bipartite motifs assume values progressively closer to the ones predicted by a null model which preserves only basic features of the observed structure, completely randomizing the rest. In other words, as 2007 approaches the World Trade Web becomes more and more compatible with the picture of a bipartite network where correlations between countries and products are progressively lost. 
Moreover, the trends characterizing the z-scores of the considered family of motifs suggest that the most evident modification in the structure of the world trade network can be considered as concluded in 2010, after a seemingly stationary phase of three years. In the second part of the paper, we have refined our analysis by considering subsets of nodes regarded in the literature as sharing similar economic traits: while the evolution of certain subgroups of countries and products confirms the trends highlighted by the global motifs, other groupings show a behavior compatible with our null model throughout the whole period 1995-2010, thus questioning the economic relevance traditionally assigned to these groups .", "after_revision": "Since 2007, several contributions have tried to identify early-warning signals of the financial crisis. However, the vast majority of analyses has focused, so far, on financial systems and little theoretical work has been done on the economic counterpart. In the present paper we fill this gap and employ the theoretical tools of network theory to shed light on the response of world trade to the financial crisis of 2007 and the economic recession of 2008-2009. We have explored the evolution of the bipartite World Trade Web (WTW) across the years 1995-2010, monitoring the behaviour of the system both before and after 2007. Remarkably, our results point out the presence of early structural changes in the WTW topology: from 2003 on, the WTW becomes more and more compatible with the picture of a network where correlations between countries and products are progressively lost. Moreover, the most evident modification in the structure of the world trade network can be considered as concluded in 2010, after a seemingly stationary phase of three years. 
We have also refined our analysis by considering specific subsets of countries and products : according to our analysis, the most statistically significant early-warning signals are provided by the most volatile macrosectors, especially when measured on emerging economies, suggesting the latter as the most sensitive indicators of the WTW health .", "edit_actions": [{"type": "A", "before": null, "after": "Since 2007, several contributions have tried to identify early-warning signals of the financial crisis. However, the vast majority of analyses has focused, so far, on financial systems and little theoretical work has been done on the economic counterpart.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "A", "before": null, "after": "fill this gap and", "start_char_pos": 25, "end_char_pos": 25}, {"type": "R", "before": "developed in network theory , in order", "after": "of network theory", "start_char_pos": 55, "end_char_pos": 93}, {"type": "D", "before": "wide", "after": null, "start_char_pos": 133, "end_char_pos": 137}, {"type": "R", "before": "2007. In particular, we", "after": "2007 and the economic recession of 2008-2009. We", "start_char_pos": 171, "end_char_pos": 194}, {"type": "D", "before": "country-product", "after": null, "start_char_pos": 240, "end_char_pos": 255}, {"type": "A", "before": null, "after": "(WTW)", "start_char_pos": 272, "end_char_pos": 272}, {"type": "R", "before": "indicate that,", "after": "point out the presence of early structural changes in the WTW topology:", "start_char_pos": 392, "end_char_pos": 406}, {"type": "R", "before": "abundances of a recently-defined class of bipartite motifs assume values progressively closer to the ones predicted by a null model which preserves only basic features of the observed structure, completely randomizing the rest. 
In other words, as 2007 approaches the World Trade Web", "after": "WTW", "start_char_pos": 425, "end_char_pos": 707}, {"type": "D", "before": "bipartite", "after": null, "start_char_pos": 763, "end_char_pos": 772}, {"type": "D", "before": "trends characterizing the z-scores of the considered family of motifs suggest that the", "after": null, "start_char_pos": 869, "end_char_pos": 955}, {"type": "R", "before": "In the second part of the paper, we have", "after": "We have also", "start_char_pos": 1117, "end_char_pos": 1157}, {"type": "R", "before": "subsets of nodes regarded in the literature as sharing similar economic traits: while the evolution of certain subgroups of", "after": "specific subsets of", "start_char_pos": 1194, "end_char_pos": 1317}, {"type": "R", "before": "confirms the trends highlighted by the global motifs, other groupings show a behavior compatible with our null model throughout the whole period 1995-2010, thus questioning the economic relevance traditionally assigned to these groups", "after": ": according to our analysis, the most statistically significant early-warning signals are provided by the most volatile macrosectors, especially when measured on emerging economies, suggesting the latter as the most sensitive indicators of the WTW health", "start_char_pos": 1341, "end_char_pos": 1575}], "sents_char_pos": [0, 176, 367, 652, 854, 1116]} {"doc_id": "1508.06797", "revision_depth": "1", "before_revision": "We perform a classification of the Lie point symmetries for the Black-Scholes-Merton model for European options with stochastic volatility \\% \\sigma, in which the last is defined by a stochastic differential equation with the Orstein-Uhlenbeck term. In this model the value of the option is given by a linear (1 + 2) evolution partial differential equation , in which the price of the option depends on two independent variables, the value of the underlying asset S and a new variable, y , which follow from the Orstein-Uhlenbeck process . 
We find that for arbitrary functional form of the volatility, \\sigma(y), the (1 + 2) evolution equation admits always two Lie symmetries, plus the linear symmetry and the infinity number of solution symmetries. However when \\sigma(y)=\\sigma_{0} and since the price of the option depends on the second Brownian motion in which the volatility is defined, the (1 + 2) evolution is not reduced to the Black-Scholes-Merton equation , the model admits five Lie symmetries, plus the linear symmetry and the infinity number of solution symmetries. Furthermore we apply the zero-order invariants of the Lie symmetries and we reduce the (1 + 2) evolution equation to a linear second-order ordinary differential equation. Finally we study two models of special interest, the Heston model and the Stein-Stein model.", "after_revision": "We perform a classification of the Lie point symmetries for the Black--Scholes--Merton Model for European options with stochastic volatility , \\sigma, in which the last is defined by a stochastic differential equation with an Orstein--Uhlenbeck term. In this model , the value of the option is given by a linear (1 + 2) evolution partial differential equation in which the price of the option depends upon two independent variables, the value of the underlying asset , S, and a new variable, y . We find that for arbitrary functional form of the volatility, \\sigma(y), the (1 + 2) evolution equation always admits two Lie point symmetries in addition to the automatic linear symmetry and the infinite number of solution symmetries. However , when \\sigma(y)=\\sigma_{0} and as the price of the option depends upon the second Brownian motion in which the volatility is defined, the (1 + 2) evolution is not reduced to the Black--Scholes--Merton Equation , the model admits five Lie point symmetries in addition to the linear symmetry and the infinite number of solution symmetries. 
We apply the zeroth-order invariants of the Lie symmetries and we reduce the (1 + 2) evolution equation to a linear second-order ordinary differential equation. Finally , we study two models of special interest, the Heston model and the Stein--Stein model.", "edit_actions": [{"type": "R", "before": "Black-Scholes-Merton model", "after": "Black--Scholes--Merton Model", "start_char_pos": 64, "end_char_pos": 90}, {"type": "R", "before": "\\%", "after": ",", "start_char_pos": 139, "end_char_pos": 141}, {"type": "R", "before": "the Orstein-Uhlenbeck", "after": "an Orstein--Uhlenbeck", "start_char_pos": 222, "end_char_pos": 243}, {"type": "A", "before": null, "after": ",", "start_char_pos": 264, "end_char_pos": 264}, {"type": "D", "before": ",", "after": null, "start_char_pos": 358, "end_char_pos": 359}, {"type": "R", "before": "on", "after": "upon", "start_char_pos": 401, "end_char_pos": 403}, {"type": "R", "before": "S", "after": ", S,", "start_char_pos": 465, "end_char_pos": 466}, {"type": "D", "before": ", which follow from the Orstein-Uhlenbeck process", "after": null, "start_char_pos": 489, "end_char_pos": 538}, {"type": "R", "before": "admits always two Lie symmetries, plus the", "after": "always admits two Lie point symmetries in addition to the automatic", "start_char_pos": 645, "end_char_pos": 687}, {"type": "R", "before": "infinity", "after": "infinite", "start_char_pos": 712, "end_char_pos": 720}, {"type": "A", "before": null, "after": ",", "start_char_pos": 760, "end_char_pos": 760}, {"type": "R", "before": "since", "after": "as", "start_char_pos": 791, "end_char_pos": 796}, {"type": "R", "before": "on", "after": "upon", "start_char_pos": 829, "end_char_pos": 831}, {"type": "R", "before": "Black-Scholes-Merton equation", "after": "Black--Scholes--Merton Equation", "start_char_pos": 939, "end_char_pos": 968}, {"type": "R", "before": "symmetries, plus", "after": "point symmetries in addition to", "start_char_pos": 997, "end_char_pos": 1013}, {"type": "R", 
"before": "infinity", "after": "infinite", "start_char_pos": 1042, "end_char_pos": 1050}, {"type": "R", "before": "Furthermore we apply the zero-order", "after": "We apply the zeroth-order", "start_char_pos": 1082, "end_char_pos": 1117}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1261, "end_char_pos": 1261}, {"type": "R", "before": "Stein-Stein", "after": "Stein--Stein", "start_char_pos": 1328, "end_char_pos": 1339}], "sents_char_pos": [0, 249, 540, 751, 1081, 1252]} {"doc_id": "1509.08409", "revision_depth": "1", "before_revision": "The study of network structure has URLanizational principles in complex systems. However, there is also a need to understand how to control them; for example, to revert a diseased cell to a healthy state, or a mature cell to a pluripotent state. Two recent methodologies suggest that the controllability of complex multivariate systems can be predicted solely from the graph of interactions between variables, without considering variable dynamics: structural controllability and minimum dominating sets. Both methodologies utilize idealized assumptions about multivariate dynamics, yet most accurate models of real-world systems do not abide by these assumptions. Here, we study the relationship between network structure and the control of multivariate dynamics using three distinct measures of controllability in Boolean Networks. We demonstrate that structure-only methods fail to properly characterize controllability in these nonlinear systems; even in very simple networks, a large variation of possible dynamics can occur for the same structure, each with different control profiles. Our methodology is also used to characterize critical control variables in three models of biochemical regulation: the Drosophila melanogaster single-cell segment polarity network , the eukaryotic cell cycle of budding yeast Saccharomyces cerevisiae, and the URLan arrangement in Arabidopsis thaliana. 
Structure-only methods both undershoot and overshoot the number and which sets of variables actually control these models, highlighting the importance of the system dynamics in determining control. Our analysis further shows that the logic of automata transition functions, namely how canalizing they are, plays a role in the extent to which structure predicts dynamics.", "after_revision": "The study of network structure has uncovered signatures of URLanization of complex systems. However, there is also a need to understand how to control them; for example, identifying strategies to revert a diseased cell to a healthy state, or a mature cell to a pluripotent state. Two recent methodologies suggest that the controllability of complex systems can be predicted solely from the graph of interactions between variables, without considering their dynamics: structural controllability and minimum dominating sets. We demonstrate that such structure-only methods fail to characterize controllability when dynamics are introduced. We study Boolean network ensembles of network motifs as well as three models of biochemical regulation: the segment polarity network in Drosophila melanogaster, the cell cycle of budding yeast Saccharomyces cerevisiae, and the URLan arrangement in Arabidopsis thaliana. We demonstrate that structure-only methods both undershoot and overshoot the number and which sets of critical variables best control the dynamics of these models, highlighting the importance of the actual system dynamics in determining control. 
Our analysis further shows that the logic of automata transition functions, namely how canalizing they are, plays an important role in the extent to which structure predicts dynamics.", "edit_actions": [{"type": "R", "before": "URLanizational principles in", "after": "uncovered signatures of URLanization of", "start_char_pos": 35, "end_char_pos": 63}, {"type": "A", "before": null, "after": "identifying strategies", "start_char_pos": 159, "end_char_pos": 159}, {"type": "D", "before": "multivariate", "after": null, "start_char_pos": 316, "end_char_pos": 328}, {"type": "R", "before": "variable", "after": "their", "start_char_pos": 431, "end_char_pos": 439}, {"type": "D", "before": "Both methodologies utilize idealized assumptions about multivariate dynamics, yet most accurate models of real-world systems do not abide by these assumptions. Here, we study the relationship between network structure and the control of multivariate dynamics using three distinct measures of controllability in Boolean Networks.", "after": null, "start_char_pos": 506, "end_char_pos": 834}, {"type": "A", "before": null, "after": "such", "start_char_pos": 855, "end_char_pos": 855}, {"type": "R", "before": "properly characterize controllability in these nonlinear systems; even in very simple networks, a large variation of possible dynamics can occur for the same structure, each with different control profiles. Our methodology is also used to characterize critical control variables in", "after": "characterize controllability when dynamics are introduced. 
We study Boolean network ensembles of network motifs as well as", "start_char_pos": 887, "end_char_pos": 1168}, {"type": "D", "before": "Drosophila melanogaster single-cell", "after": null, "start_char_pos": 1213, "end_char_pos": 1248}, {"type": "R", "before": ", the eukaryotic", "after": "in Drosophila melanogaster, the", "start_char_pos": 1274, "end_char_pos": 1290}, {"type": "R", "before": "Structure-only", "after": "We demonstrate that structure-only", "start_char_pos": 1396, "end_char_pos": 1410}, {"type": "R", "before": "variables actually control", "after": "critical variables best control the dynamics of", "start_char_pos": 1478, "end_char_pos": 1504}, {"type": "A", "before": null, "after": "actual", "start_char_pos": 1554, "end_char_pos": 1554}, {"type": "R", "before": "a", "after": "an important", "start_char_pos": 1709, "end_char_pos": 1710}], "sents_char_pos": [0, 80, 145, 246, 505, 665, 834, 952, 1093, 1395, 1594]} {"doc_id": "1510.04165", "revision_depth": "1", "before_revision": "Energy efficiency significantly influences user experience of battery-driven devices such as smartphones and tablets. The goal of an energy model of source code is to lay a foundation for energy-saving techniques from architecture to software development. The challenge is linking hardware energy consumption to the high-level application source code, considering the complex run-time context , such as thread scheduling, user inputs and the abstraction of the virtual machine. Traditional energy modeling is bottom-to-top, but this approach faces obstacles when software consists of a number of abstract layers. In this paper, we propose a top-to-bottom view. We focus on identifying valuable information from the source code, which results in the idea of utilizing an intermediate representation, \"energy operation \" , to capture the energy characteristics. 
The experiment results show that the energy model at such a high-level can reduce the error margin to within 10\\% and enable energy breakdown at function-level, which helps developers understand the energy-related features of the code .", "after_revision": "Energy efficiency has a significant influence on user experience of battery-driven devices such as smartphones and tablets. The goal of an energy model for source code is to lay a foundation for the application of energy-saving techniques during software development. The challenge is to relate hardware energy consumption to high-level application code, considering the complex run-time context and software stack. Traditional techniques build the energy model by mapping a hardware energy model onto software constructs; this approach faces obstacles when the software stack consists of a number of abstract layers. Another approach that has been followed is to utilize hardware or operating system features to estimate software energy information at a coarse level of granularity such as blocks, methods or even applications. In this paper, we explain how to construct a fine-grained energy model for the source code, which is based on \"energy operations \" identified directly from the source code and able to provide more valuable information for code optimization. 
We apply the approach to a class of applications based on a game-engine, and explain the wider applicability of the method .", "edit_actions": [{"type": "R", "before": "significantly influences", "after": "has a significant influence on", "start_char_pos": 18, "end_char_pos": 42}, {"type": "R", "before": "of", "after": "for", "start_char_pos": 146, "end_char_pos": 148}, {"type": "A", "before": null, "after": "the application of", "start_char_pos": 188, "end_char_pos": 188}, {"type": "R", "before": "from architecture to", "after": "during", "start_char_pos": 214, "end_char_pos": 234}, {"type": "R", "before": "linking", "after": "to relate", "start_char_pos": 274, "end_char_pos": 281}, {"type": "D", "before": "the", "after": null, "start_char_pos": 313, "end_char_pos": 316}, {"type": "D", "before": "source", "after": null, "start_char_pos": 340, "end_char_pos": 346}, {"type": "R", "before": ", such as thread scheduling, user inputs and the abstraction of the virtual machine. Traditional energy modeling is bottom-to-top, but", "after": "and software stack. Traditional techniques build the energy model by mapping a hardware energy model onto software constructs;", "start_char_pos": 394, "end_char_pos": 528}, {"type": "R", "before": "software", "after": "the software stack", "start_char_pos": 564, "end_char_pos": 572}, {"type": "A", "before": null, "after": "Another approach that has been followed is to utilize hardware or operating system features to estimate software energy information at a coarse level of granularity such as blocks, methods or even applications.", "start_char_pos": 614, "end_char_pos": 614}, {"type": "R", "before": "propose a top-to-bottom view. 
We focus on identifying valuable information from", "after": "explain how to construct a fine-grained energy model for", "start_char_pos": 633, "end_char_pos": 712}, {"type": "R", "before": "results in the idea of utilizing an intermediate representation,", "after": "is based on", "start_char_pos": 736, "end_char_pos": 800}, {"type": "R", "before": "operation", "after": "operations", "start_char_pos": 809, "end_char_pos": 818}, {"type": "R", "before": ", to capture the energy characteristics. The experiment results show that the energy model at such a high-level can reduce the error margin to within 10\\% and enable energy breakdown at function-level, which helps developers understand the energy-related features of the code", "after": "identified directly from the source code and able to provide more valuable information for code optimization. We apply the approach to a class of applications based on a game-engine, and explain the wider applicability of the method", "start_char_pos": 821, "end_char_pos": 1096}], "sents_char_pos": [0, 117, 256, 478, 613, 662, 861]} {"doc_id": "1510.05858", "revision_depth": "2", "before_revision": "This paper considers a market model with two levels of information. The public information generated by the financial assets, and a larger flow of information containing additional knowledge about a death time(random time /horizon) of an insured. By expanding the filtration, the death uncertainty and its entailed risk are fully considered without any mathematical restriction. In this context , which catches real features such as correlation between the market model and the time of death, we address the risk-minimization problem \\`a la F\\\"ollmer-Sondermann for a large class of equity-linked mortality contracts. The challenge in this setting, when no modelspecification for these securities nor for the death timeis given, lies in finding the dynamics and the structures for the mortality / longevity securities used in the securitization. 
To overcome this obstacle, we elaborate our optional martingale representation results, which state that any local martingale in the large filtration stopped at the death time can be decomposed into several and precise orthogonal local martingales . This constitutes our first principal novel contribution. Thanks to this optional representation, we succeed to decompose the risk in some popular mortality and/or longevity securities into the sum of orthogonal risks using a risk basis. One of the components of this basis is a new martingale, in the large filtration, that possesses nice features. Hence, the dynamics of mortality and longevity securities used in the securitization is described without mortality specification, and this constitutes our second novel contribution. Our third main contribution resides in finding explicitly the risk-minimization strategy as well as the corresponding undiversified risk for a largest class of mortality/longevity linked liabilities with or without the mortality securitization .", "after_revision": "We consider a market model where there are two levels of information. The public information generated by the financial assets, and a larger flow of information that contains additional knowledge about a random time. This random time can represent many economic and financial settings, such as the default time of a firm for credit risk, and the death time of an insured for life insurance. By using the expansion of filtration, the random time uncertainty and its entailed risk are fully considered without any mathematical restriction. In this context with no model's specification for the random time, the main challenge lies in finding the dynamics and the structures for the value processes of defaultable or mortality and / or longevity securities which are vital for the insurance securitization. 
To overcome this obstacle, we elaborate our optional martingale representation results, which state that any martingale in the large filtration stopped at the random time can be decomposed into precise and unique orthogonal local martingales (i.e. local martingales whose product remains a local martingale). This constitutes our first and probably the principal contribution. Even though the driving motivation for this representation resides in credit risk theory, our results are applicable to several other financial and economics contexts, such as life insurance and financial markets with random horizon. Thanks to this optional representation, we decompose any defaultable or mortality and/or longevity liability into the sum of \"non-correlated\" risks using a risk basis. This constitutes our second contribution .", "edit_actions": [{"type": "R", "before": "This paper considers", "after": "We consider", "start_char_pos": 0, "end_char_pos": 20}, {"type": "R", "before": "with", "after": "where there are", "start_char_pos": 36, "end_char_pos": 40}, {"type": "R", "before": "containing", "after": "that contains", "start_char_pos": 159, "end_char_pos": 169}, {"type": "R", "before": "death time(random time /horizon) of an insured. By expanding the", "after": "random time. This random time can represent many economic and financial settings, such as the default time of a firm for credit risk, and the death time of an insured for life insurance. By using the expansion of", "start_char_pos": 199, "end_char_pos": 263}, {"type": "R", "before": "death", "after": "random time", "start_char_pos": 280, "end_char_pos": 285}, {"type": "R", "before": ", which catches real features such as correlation between the market model and the time of death, we address the risk-minimization problem \\`a la F\\\"ollmer-Sondermann for a large class of equity-linked mortality contracts. 
The challenge in this setting, when no modelspecification for these securities nor for the death timeis given,", "after": "with no model's specification for the random time, the main challenge", "start_char_pos": 395, "end_char_pos": 728}, {"type": "R", "before": "mortality", "after": "value processes of defaultable or mortality and", "start_char_pos": 785, "end_char_pos": 794}, {"type": "R", "before": "longevity securities used in the", "after": "or longevity securities which are vital for the insurance", "start_char_pos": 797, "end_char_pos": 829}, {"type": "D", "before": "local", "after": null, "start_char_pos": 955, "end_char_pos": 960}, {"type": "R", "before": "death", "after": "random", "start_char_pos": 1011, "end_char_pos": 1016}, {"type": "R", "before": "several and precise", "after": "precise and unique", "start_char_pos": 1045, "end_char_pos": 1064}, {"type": "R", "before": ".", "after": "(i.e. local martingales whose product remains a local martingale).", "start_char_pos": 1094, "end_char_pos": 1095}, {"type": "R", "before": "principal novel contribution.", "after": "and probably the principal contribution. Even though the driving motivation for this representation resides in credit risk theory, our results are applicable to several other financial and economics contexts, such as life insurance and financial markets with random horizon.", "start_char_pos": 1123, "end_char_pos": 1152}, {"type": "R", "before": "succeed to decompose the risk in some popular", "after": "decompose any defaultable or", "start_char_pos": 1196, "end_char_pos": 1241}, {"type": "R", "before": "securities", "after": "liability", "start_char_pos": 1269, "end_char_pos": 1279}, {"type": "R", "before": "orthogonal", "after": "\"non-correlated\"", "start_char_pos": 1296, "end_char_pos": 1306}, {"type": "R", "before": "One of the components of this basis is a new martingale, in the large filtration, that possesses nice features. 
Hence, the dynamics of mortality and longevity securities used in the securitization is described without mortality specification, and this", "after": "This", "start_char_pos": 1333, "end_char_pos": 1584}, {"type": "R", "before": "novel contribution. Our third main contribution resides in finding explicitly the risk-minimization strategy as well as the corresponding undiversified risk for a largest class of mortality/longevity linked liabilities with or without the mortality securitization", "after": "contribution", "start_char_pos": 1608, "end_char_pos": 1871}], "sents_char_pos": [0, 67, 246, 378, 617, 845, 1095, 1152, 1332, 1444, 1627]} {"doc_id": "1511.01667", "revision_depth": "2", "before_revision": "Classical molecular dynamics (MD) simulations, within the AMBER program package that runs entirely on a CUDA-enabled NVIDIA graphic processing unit (GPU), were employed to study the dynamics of the methane-thiosulfonate spin labelled (MTSL) Aurora-A kinase activation loop in a very short time and with good quality of the sampling. The MD simulation provided a wealth of information on the interactions between MTSL and protein residues, and on the different motional contributions to the overall dynamics of the MTSL that were validated using a multifrequency electron paramagnetic resonance (EPR) approach. The latter relayed on the frequency dependence of the resolution of the fast and slow motions of the spin probe and was used to distinguish the fast internal motion of the spin label from the slow protein tumbling. Data obtained from MD were in good agreement with those obtained from quantum mechanical (QM) methods, but more interactions within the dynamics of the system were revealed than from QM. A strong correlation between the tumbling of the protein and the transitions of the X4 dihedral angle of the MTSL was observed with a consequent effect on the distribution of the nitroxide (NO) group in space and time . 
The theoretical EPR spectra were calculated using selected configurations of MTSL probing different micro-environments of the protein characterized by different polarity. The comparison between the theoretical and experimental 9 GHz and 94 GHz EPR spectra revealed that some fits were in good agreement with the experimental EPR spectra, indicating a predominance of some conformational states of the full spin-labelled system. This work is a starting point for deeper experimental and theoretical studies of the diffusion properties of the Aurora-A kinase protein related to its overall tumbling and biological activity .", "after_revision": "Classical molecular dynamics (MD) simulations, within the AMBER program package that runs entirely on a CUDA-enabled NVIDIA graphic processing unit (GPU), were employed to study with low computational cost and good quality of the sampling the dynamics of the methane-thiosulfonate spin label (MTSL) attached to the activation loop of the Aurora-A kinase. MD provided a wealth of information about the timescale of the different motional contributions to the overall dynamics of the spin label. These data were validated by multi-frequency continuous-wave electron paramagnetic resonance (EPR) measurements, that relying on the frequency dependence of the fast and slow motions of the spin probe were used to distinguish the fast internal motion of the spin label from slow protein tumbling. It was found that the activation loop oscillated between two conformational states separated by 7 Angstrom and the average structures obtained from the MD trajectories showed the MTSL exposed to the solvent and probing the C-lobe of the protein . 
The theoretical 9 and 94 GHz EPR spectra were calculated using configurations representing the interactions between MTSL and water and the tyrosine residue 208 in the C-lobe; and the comparison with experimental EPR spectra revealed that fits successfully reproduced the experimental spectra in agreement with the MD results .", "edit_actions": [{"type": "R", "before": "the", "after": "with low computational cost and good quality of the sampling the", "start_char_pos": 178, "end_char_pos": 181}, {"type": "R", "before": "labelled", "after": "label", "start_char_pos": 225, "end_char_pos": 233}, {"type": "A", "before": null, "after": "attached to the activation loop of the", "start_char_pos": 241, "end_char_pos": 241}, {"type": "R", "before": "kinase activation loop in a very short time and with good quality of the sampling. The MD simulation", "after": "kinase. MD", "start_char_pos": 251, "end_char_pos": 351}, {"type": "R", "before": "on the interactions between MTSL and protein residues, and on", "after": "about the timescale of", "start_char_pos": 385, "end_char_pos": 446}, {"type": "R", "before": "MTSL that were validated using a multifrequency", "after": "spin label. These data were validated by multi-frequency continuous-wave", "start_char_pos": 515, "end_char_pos": 562}, {"type": "R", "before": "approach. The latter relayed", "after": "measurements, that relying", "start_char_pos": 601, "end_char_pos": 629}, {"type": "D", "before": "resolution of the", "after": null, "start_char_pos": 665, "end_char_pos": 682}, {"type": "R", "before": "and was", "after": "were", "start_char_pos": 723, "end_char_pos": 730}, {"type": "D", "before": "the", "after": null, "start_char_pos": 799, "end_char_pos": 802}, {"type": "R", "before": "Data obtained from MD were in good agreement with those obtained from quantum mechanical (QM) methods, but more interactions within the dynamics of the system were revealed than from QM. 
A strong correlation between the tumbling of the protein and the transitions of the X4 dihedral angle of the MTSL was observed with a consequent effect on the distribution of the nitroxide (NO) group in space and time", "after": "It was found that the activation loop oscillated between two conformational states separated by 7 Angstrom and the average structures obtained from the MD trajectories showed the MTSL exposed to the solvent and probing the C-lobe of the protein", "start_char_pos": 826, "end_char_pos": 1230}, {"type": "A", "before": null, "after": "9 and 94 GHz", "start_char_pos": 1249, "end_char_pos": 1249}, {"type": "R", "before": "selected configurations of MTSL probing different micro-environments of the protein characterized by different polarity. The comparison between the theoretical and experimental 9 GHz and 94 GHz", "after": "configurations representing the interactions between MTSL and water and the tyrosine residue 208 in the C-lobe; and the comparison with experimental", "start_char_pos": 1284, "end_char_pos": 1477}, {"type": "R", "before": "some fits were in good", "after": "fits successfully reproduced the experimental spectra in", "start_char_pos": 1504, "end_char_pos": 1526}, {"type": "R", "before": "experimental EPR spectra, indicating a predominance of some conformational states of the full spin-labelled system. This work is a starting point for deeper experimental and theoretical studies of the diffusion properties of the Aurora-A kinase protein related to its overall tumbling and biological activity", "after": "MD results", "start_char_pos": 1546, "end_char_pos": 1854}], "sents_char_pos": [0, 333, 610, 825, 1012, 1232, 1404, 1661]} {"doc_id": "1511.09041", "revision_depth": "1", "before_revision": "We study pricing and superhedging strategies for game options in an imperfect market with default. 
We extend the results obtained by Kifer Kifer in the case of a perfect market model to the case of imperfections on the market taken into account via the nonlinearity of the wealth dynamics. In this framework, the pricing system is expressed as a nonlinear g-expectation/evaluation induced by a nonlinear BSDE with jump. We prove that the superhedging price of a game option{\\em { coincides with the value function of a corresponding {\\em generalized} Dynkin game expressed in terms of the g-evaluation , recently introduced in DQS2 . We then address the case of ambiguity on the model, - for example an ambiguity on the default probability - , and characterize the superhedging price of a game option as the value function of a {\\em mixed generalized} Dynkin game. We prove the existence of a cancellation time and a trading strategy for the seller which allow him/her to be super-hedged, whatever the model is . This study is introduced by the analysis of the simpler case of American options .", "after_revision": "We study pricing and superhedging strategies for game options in an imperfect market with default. We extend the results obtained by Kifer in Kifer in the case of a perfect market model to the case of an imperfect market with default, when the imperfections are taken into account via the nonlinearity of the wealth dynamics. We introduce the{\\em seller's price of the game option as the infimum of the initial wealths which allow the seller to be superhedged. We{prove that this price coincides with the value function of an associated {\\em generalized} Dynkin game , recently introduced in DQS2 , expressed with a nonlinear expectation induced by a nonlinear BSDE with default jump. We moreover study the existence of superhedging strategies. We then address the case of ambiguity on the model, - for example ambiguity on the default probability - and characterize the robust seller's price of a game option as the value function of a {\\em mixed generalized} Dynkin game. 
We study the existence of a cancellation time and a trading strategy which allow the seller to be super-hedged, whatever the model is .", "edit_actions": [{"type": "A", "before": null, "after": "in", "start_char_pos": 139, "end_char_pos": 139}, {"type": "R", "before": "imperfections on the market", "after": "an imperfect market with default, when the imperfections are", "start_char_pos": 199, "end_char_pos": 226}, {"type": "R", "before": "In this framework, the pricing system is expressed as a nonlinear g-expectation/evaluation induced by a nonlinear BSDE with jump. We prove that the superhedging price of a game option", "after": "We introduce the", "start_char_pos": 291, "end_char_pos": 474}, {"type": "A", "before": null, "after": "seller's price", "start_char_pos": 479, "end_char_pos": 479}, {"type": "A", "before": null, "after": "of the game option as the infimum of the initial wealths which allow the seller to be superhedged. We", "start_char_pos": 480, "end_char_pos": 480}, {"type": "A", "before": null, "after": "prove", "start_char_pos": 481, "end_char_pos": 481}, {"type": "A", "before": null, "after": "that this price", "start_char_pos": 482, "end_char_pos": 482}, {"type": "R", "before": "a corresponding", "after": "an associated", "start_char_pos": 520, "end_char_pos": 535}, {"type": "D", "before": "expressed in terms of the g-evaluation", "after": null, "start_char_pos": 566, "end_char_pos": 604}, {"type": "R", "before": ". We", "after": ", expressed with a nonlinear expectation induced by a nonlinear BSDE with default jump. We moreover study the existence of superhedging strategies. 
We", "start_char_pos": 635, "end_char_pos": 639}, {"type": "D", "before": "an", "after": null, "start_char_pos": 703, "end_char_pos": 705}, {"type": "D", "before": ",", "after": null, "start_char_pos": 745, "end_char_pos": 746}, {"type": "R", "before": "superhedging", "after": "robust seller's", "start_char_pos": 768, "end_char_pos": 780}, {"type": "R", "before": "prove", "after": "study", "start_char_pos": 871, "end_char_pos": 876}, {"type": "R", "before": "for the seller which allow him/her", "after": "which allow the seller", "start_char_pos": 937, "end_char_pos": 971}, {"type": "D", "before": ". This study is introduced by the analysis of the simpler case of American options", "after": null, "start_char_pos": 1014, "end_char_pos": 1096}], "sents_char_pos": [0, 98, 290, 420, 867, 1015]} {"doc_id": "1512.08085", "revision_depth": "1", "before_revision": "The origin of multicellularity is a fundamental open question in biology. For URLanisms to evolve from an aggregate of URLanisms, cells with an identical genotype must first differentiate into several types. Second, this aggregate of distinct cell types should show better growth than that of an isolated cell in the environment . Third, this cell aggregate should show robustness in the number distribution of differentiated cell types. To reveal how an ensemble of primitive cells achieves these conditions, we developed a dynamical-systems model of cells consisting of chemical components with intracellular catalytic reaction dynamics. The reactions convert external nutrients to internal components for cellular growth, and the divided cells interact through chemical diffusion. We found that cells sharing an identical catalytic network spontaneously differentiate induced by cell-cell interactions, and then achieve cooperative division of labor, the mutual use of products among differentiated cell types, enabling a higher growth rate than that in the unicellular case. 
This symbiotic differentiation emerged for a class of reaction networks under the condition of nutrient limitation and strong cell-cell interactions. Then, robustness in the cell type distribution was achieved, while instability of collective growth sometimes emerged even among the cooperative cells when the internal reserves of chemical products is dominant. The simplicity and generality of the present mechanism suggests that evolution to multicellularity is a natural consequence of interacting cells with limited resources, being consistent with the behaviors and forms of several extant primitive forms of multicellularity, such as certain bacteria .", "after_revision": "As cells grow and divide under a given environment, they become crowded and resources are limited, as seen in bacterial biofilms and multicellular aggregates. These cells often show strong interactions through exchanging chemicals, as in quorum sensing, to achieve mutualism. Here, to achieve stable division of labor, three properties are required. First, isogenous cells differentiate into several types. Second, this aggregate of distinct cell types shows better growth than that of isolated cells, by achieving division of labor . Third, this cell aggregate is robust in the number distribution of differentiated cell types. We here address how cells acquire the ability of cell differentiation and division of labor simultaneously, which is also connected with the robustness of a cell society. For this purpose, we developed a dynamical-systems model of cells consisting of chemical components with intracellular catalytic reaction dynamics. The reactions convert external nutrients into internal components for cellular growth, and the divided cells interact via chemical diffusion. 
We found that cells sharing an identical catalytic network spontaneously differentiate via induction from cell-cell interactions, and then achieve division of labor, enabling a higher growth rate than that in the unicellular case. This symbiotic differentiation emerged for a class of reaction networks with limited resources and strong cell-cell interactions. Then, robustness in the cell type distribution was achieved, while instability of collective growth could emerge even among the cooperative cells when the internal reserves of products were dominant. The present mechanism is simple and general as a natural result of interacting cells with resource limitation, and is consistent with the observed behaviors and forms of several aggregates of URLanisms .", "edit_actions": [{"type": "R", "before": "The origin of multicellularity is a fundamental open question in biology. For URLanisms to evolve from an aggregate of URLanisms, cells with an identical genotype must first", "after": "As cells grow and divide under a given environment, they become crowded and resources are limited, as seen in bacterial biofilms and multicellular aggregates. These cells often show strong interactions through exchanging chemicals, as in quorum sensing, to achieve mutualism. Here, to achieve stable division of labor, three properties are required. 
First, isogenous cells", "start_char_pos": 0, "end_char_pos": 173}, {"type": "R", "before": "should show", "after": "shows", "start_char_pos": 254, "end_char_pos": 265}, {"type": "R", "before": "an isolated cell in the environment", "after": "isolated cells, by achieving division of labor", "start_char_pos": 293, "end_char_pos": 328}, {"type": "R", "before": "should show robustness", "after": "is robust", "start_char_pos": 358, "end_char_pos": 380}, {"type": "R", "before": "To reveal how an ensemble of primitive cells achieves these conditions,", "after": "We here address how cells acquire the ability of cell differentiation and division of labor simultaneously, which is also connected with the robustness of a cell society. For this purpose,", "start_char_pos": 438, "end_char_pos": 509}, {"type": "R", "before": "to", "after": "into", "start_char_pos": 681, "end_char_pos": 683}, {"type": "R", "before": "through", "after": "via", "start_char_pos": 756, "end_char_pos": 763}, {"type": "R", "before": "induced by", "after": "via induction from", "start_char_pos": 871, "end_char_pos": 881}, {"type": "D", "before": "cooperative", "after": null, "start_char_pos": 923, "end_char_pos": 934}, {"type": "D", "before": "the mutual use of products among differentiated cell types,", "after": null, "start_char_pos": 954, "end_char_pos": 1013}, {"type": "R", "before": "under the condition of nutrient limitation", "after": "with limited resources", "start_char_pos": 1151, "end_char_pos": 1193}, {"type": "R", "before": "sometimes emerged", "after": "could emerge", "start_char_pos": 1329, "end_char_pos": 1346}, {"type": "R", "before": "chemical products is", "after": "products were", "start_char_pos": 1410, "end_char_pos": 1430}, {"type": "R", "before": "simplicity and generality of the present mechanism suggests that evolution to multicellularity is a natural consequence", "after": "present mechanism is simple and general as a natural result", "start_char_pos": 1445, "end_char_pos": 
1564}, {"type": "R", "before": "limited resources, being", "after": "resource limitation, and is", "start_char_pos": 1591, "end_char_pos": 1615}, {"type": "A", "before": null, "after": "observed", "start_char_pos": 1636, "end_char_pos": 1636}, {"type": "R", "before": "extant primitive forms of multicellularity, such as certain bacteria", "after": "aggregates of URLanisms", "start_char_pos": 1668, "end_char_pos": 1736}], "sents_char_pos": [0, 73, 207, 330, 437, 639, 783, 1078, 1228, 1440]} {"doc_id": "1601.04110", "revision_depth": "1", "before_revision": "On the basis of experimental data and mathematical equations in the literature, we remodel the ionic dynamics of smooth muscle cells (SMCs) as an eigensystem formulation, that is valid for investigating finite variations from equilibrium like in common experimental operations. This algorithm provides an alternate viewpoint on frequency-domain analysis and enables one to quantify the synchronizing timings among signaling pathways for rhythmical calcium oscillations . Numerical results show three types of calcium oscillations of SMCs in mesenteric arterioles: the spontaneous calcium oscillation, the agonist-dependent calcium oscillation, and the agonist-dependent calcium spike. For the noticeable agonist-dependent types in mesenteric SMCs, we demonstrate the flow of signaling pathways associated with intracellular calcium oscillationsand find the in-phase (out-phase) tendency among cation-cation (cation-anion), which implies the maximization of charge oscillations. For involving intercellular calcium dynamics in finite cell clusters, we observe the window-broadening of the oscillation-frequency spectrum under increasing cell numbers, which hence raises the possibility of synchronized oscillations for vasomotions. 
Our analyses indicate that the rhythm of SMC cluster could (1) correspond to significant enhancements of signal communications among remote cells, (2) respond to calcium sparks against transient stimulations for driving follow-up globally-oscillating modes, and (3) characterize the globally-oscillating modes via frog-leap (non-molecular-diffusion) calcium waves across inhomogeneous SMCs.", "after_revision": "On the basis of experimental data and mathematical equations in the literature, we remodel the ionic dynamics of smooth muscle cells (SMCs) as an eigensystem formulation, which is valid for investigating finite variations of variables from the equilibrium like in common experimental operations. This algorithm provides an alternate viewpoint from frequency-domain analysis and enables one to probe functionalities of SMC's rhythm by means of a resonance-related mechanism . Numerical results show three types of calcium oscillations of SMCs in mesenteric arterioles: spontaneous calcium oscillation, agonist-dependent calcium oscillation, and agonist-dependent calcium spike. For simple single and double SMCs, we demonstrate properties of synchronization among complex signals related to calcium oscillations, and show different correlation relations between calcium and voltage signals for various synchronization and resonance conditions. 
For practical cell clusters, our analyses indicate that the rhythm of SMCs could (1) benefit enhancements of signal communications among remote cells, (2) respond to a significant calcium peaking against transient stimulations for triggering globally-oscillating modes, and (3) characterize the globally-oscillating modes via frog-leap (non-molecular-diffusion) calcium waves across inhomogeneous SMCs.", "edit_actions": [{"type": "R", "before": "that", "after": "which", "start_char_pos": 171, "end_char_pos": 175}, {"type": "R", "before": "from", "after": "of variables from the", "start_char_pos": 221, "end_char_pos": 225}, {"type": "R", "before": "on", "after": "from", "start_char_pos": 325, "end_char_pos": 327}, {"type": "R", "before": "quantify the synchronizing timings among signaling pathways for rhythmical calcium oscillations", "after": "probe functionalities of SMC's rhythm by means of a resonance-related mechanism", "start_char_pos": 373, "end_char_pos": 468}, {"type": "D", "before": "the", "after": null, "start_char_pos": 564, "end_char_pos": 567}, {"type": "D", "before": "the", "after": null, "start_char_pos": 601, "end_char_pos": 604}, {"type": "D", "before": "the", "after": null, "start_char_pos": 648, "end_char_pos": 651}, {"type": "R", "before": "the noticeable agonist-dependent types in mesenteric", "after": "simple single and double", "start_char_pos": 689, "end_char_pos": 741}, {"type": "R", "before": "the flow of signaling pathways associated with intracellular calcium oscillationsand find the in-phase (out-phase) tendency among cation-cation (cation-anion), which implies the maximization of charge oscillations. For involving intercellular calcium dynamics in finite", "after": "properties of synchronization among complex signals related to calcium oscillations, and show different correlation relations between calcium and voltage signals for various synchronization and resonance conditions. 
For practical", "start_char_pos": 763, "end_char_pos": 1032}, {"type": "R", "before": "we observe the window-broadening of the oscillation-frequency spectrum under increasing cell numbers, which hence raises the possibility of synchronized oscillations for vasomotions. Our", "after": "our", "start_char_pos": 1048, "end_char_pos": 1234}, {"type": "R", "before": "SMC cluster", "after": "SMCs", "start_char_pos": 1272, "end_char_pos": 1283}, {"type": "R", "before": "correspond to significant", "after": "benefit", "start_char_pos": 1294, "end_char_pos": 1319}, {"type": "R", "before": "calcium sparks", "after": "a significant calcium peaking", "start_char_pos": 1393, "end_char_pos": 1407}, {"type": "R", "before": "driving follow-up", "after": "triggering", "start_char_pos": 1443, "end_char_pos": 1460}], "sents_char_pos": [0, 277, 684, 977, 1230]} {"doc_id": "1601.05506", "revision_depth": "1", "before_revision": "Diffusion-based network models are widely used for protein function prediction using protein network data , and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernels. Only rarely has the hierarchy been used as a second layer of the network connecting the protein network with functional annotations . We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information flows on this two-layer model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. 
In contrast, AptRank uses an adaptive diffusion mechanism . We evaluate the ability of both methods to predict protein function on yeast, fly, and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We find that both BirgRank and AptRank outperform the previous methods, especially when only 10\\% of the data are given for training. AptRank naturally combines protein-protein associations and function-function relationships into a two-layer network model , and takes full advantage of the hierarchical structure of the Gene Ontology, using directional diffusion without flattening the ontological hierarchy into a similarity kernel. Introducing an adaptive mechanism to the traditional, fixed-parameter model of PageRank greatly improves the accuracy of protein function prediction.", "after_revision": "Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood- and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model . We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is an application of traditional PageRank with fixed decay parameters. In contrast, AptRank uses an adaptive mechanism to improve the performance of BirgRank . 
We evaluate both methods in predicting protein function on yeast, fly, and human datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design three validation strategies: missing function prediction, de novo function prediction, and guided function prediction to comprehensively evaluate all six methods. We find that both BirgRank and AptRank outperform the others, especially in missing function prediction when using only 10\\% of the data for training. AptRank combines protein-protein associations and the GO function-function hierarchy into a two-layer network model without flattening the hierarchy into a similarity kernel. Introducing an adaptive mechanism to the traditional, fixed-parameter model of PageRank greatly improves the accuracy of protein function prediction.", "edit_actions": [{"type": "D", "before": ",", "after": null, "start_char_pos": 106, "end_char_pos": 107}, {"type": "R", "before": "neighborhood-based", "after": "neighborhood-", "start_char_pos": 142, "end_char_pos": 160}, {"type": "A", "before": null, "after": "either", "start_char_pos": 365, "end_char_pos": 365}, {"type": "R", "before": "kernels. Only rarely has the hierarchy been used as a second layer of the network connecting the protein network with functional annotations", "after": "kernel. 
No study has taken the GO hierarchy into account together with the protein network as a two-layer network model", "start_char_pos": 509, "end_char_pos": 649}, {"type": "D", "before": "flows", "after": null, "start_char_pos": 914, "end_char_pos": 919}, {"type": "A", "before": null, "after": "graph", "start_char_pos": 938, "end_char_pos": 938}, {"type": "R", "before": "a direct", "after": "an", "start_char_pos": 958, "end_char_pos": 966}, {"type": "R", "before": "diffusion mechanism", "after": "mechanism to improve the performance of BirgRank", "start_char_pos": 1070, "end_char_pos": 1089}, {"type": "R", "before": "the ability of both methods to predict", "after": "both methods in predicting", "start_char_pos": 1104, "end_char_pos": 1142}, {"type": "D", "before": "protein", "after": null, "start_char_pos": 1185, "end_char_pos": 1192}, {"type": "A", "before": null, "after": "design three validation strategies: missing function prediction, de novo function prediction, and guided function prediction to comprehensively evaluate all six methods. 
We", "start_char_pos": 1287, "end_char_pos": 1287}, {"type": "R", "before": "previous methods, especially when", "after": "others, especially in missing function prediction when using", "start_char_pos": 1339, "end_char_pos": 1372}, {"type": "D", "before": "are given", "after": null, "start_char_pos": 1395, "end_char_pos": 1404}, {"type": "D", "before": "naturally", "after": null, "start_char_pos": 1427, "end_char_pos": 1436}, {"type": "A", "before": null, "after": "the GO", "start_char_pos": 1479, "end_char_pos": 1479}, {"type": "R", "before": "relationships", "after": "hierarchy", "start_char_pos": 1498, "end_char_pos": 1511}, {"type": "D", "before": ", and takes full advantage of the hierarchical structure of the Gene Ontology, using directional diffusion", "after": null, "start_char_pos": 1543, "end_char_pos": 1649}, {"type": "D", "before": "ontological", "after": null, "start_char_pos": 1673, "end_char_pos": 1684}], "sents_char_pos": [0, 186, 330, 517, 651, 796, 945, 1031, 1091, 1283, 1418, 1720]} {"doc_id": "1603.07074", "revision_depth": "1", "before_revision": "Recently, based on the idea of randomizing space theory, random convex analysis has been being developed in order to deal with the corresponding problems in random environments such as analysis of conditional convex risk measures and the related variational problems and optimization problems. Random convex analysis is convex analysis over random locally convex modules. Since random locally convex modules have the more complicated topological and algebraic structures than ordinary locally convex spaces, establishing random convex analysis will encounter harder mathematical challenges than classical convex analysis so that there are still a lot of fundamentally important unsolved problems in random convex analysis. This paper is devoted to solving some important theoretic problems. 
First, we establish the inferior limit behavior of a proper lower semicontinuous L ^{0 function on a random locally convex module endowed with the locally L ^{0 topology, which makes perfect the Fenchel-Moreau duality theorem for such functions. Then, we investigate the relations among continuity, locally L ^{0 continuity and almost surely sequent continuity of a proper L ^{0 function. And then, we establish the elegant relationships among subdifferentiability, Gateaux-differentiability and Frechet-differentiability for a proper L ^{0 function defined on random normed modules. At last, based on the Ekeland's variational principle for a proper lower semicontinuous- \\bar{L} ^{0 function, we show that \\epsilon-subdifferentials can be approximated by subdifferentials. We would like to emphasize that the success of this paper lies in simultaneously considering the ( \\epsilon , \\lambda) -topology and the locally L ^{0 topology for a random locally convex module.", "after_revision": "Recently, based on the idea of randomizing space theory, random convex analysis has been being developed in order to deal with the corresponding problems in random environments such as analysis of conditional convex risk measures and the related variational problems and optimization problems. Random convex analysis is convex analysis over random locally convex modules. Since random locally convex modules have the more complicated topological and algebraic structures than ordinary locally convex spaces, establishing random convex analysis will encounter harder mathematical challenges than classical convex analysis so that there are still a lot of fundamentally important unsolved problems in random convex analysis. This paper is devoted to solving some important theoretic problems. 
First, we establish the inferior limit behavior of a proper lower semicontinuous L ^0--convex function on a random locally convex module endowed with the locally L ^0--convex topology, which makes perfect the Fenchel--Moreau duality theorem for such functions. Then, we investigate the relations among continuity, locally L ^0--Lipschitzian continuity and almost surely sequent continuity of a proper L ^0--convex function. And then, we establish the elegant relationships among subdifferentiability, G\\^ateaux--differentiability and Fr\\'ech\\'et--differentiability for a proper L ^0--convex function defined on random normed modules. At last, based on the Ekeland's variational principle for a proper lower semicontinuous \\bar{L} ^0--valued function, we show that \\varepsilon--subdifferentials can be approximated by subdifferentials. We would like to emphasize that the success of this paper lies in simultaneously considering the ( \\varepsilon , \\lambda) --topology and the locally L ^0--convex topology for a random locally convex module.", "edit_actions": [{"type": "R", "before": "^{0", "after": "^0--convex", "start_char_pos": 874, "end_char_pos": 877}, {"type": "R", "before": "^{0", "after": "^0--convex", "start_char_pos": 948, "end_char_pos": 951}, {"type": "R", "before": "Fenchel-Moreau", "after": "Fenchel--Moreau", "start_char_pos": 986, "end_char_pos": 1000}, {"type": "R", "before": "^{0", "after": "^0--Lipschitzian", "start_char_pos": 1100, "end_char_pos": 1103}, {"type": "R", "before": "^{0", "after": "^0--convex", "start_char_pos": 1166, "end_char_pos": 1169}, {"type": "R", "before": "Gateaux-differentiability and Frechet-differentiability", "after": "G\\^ateaux--differentiability and Fr\\'ech\\'et--differentiability", "start_char_pos": 1257, "end_char_pos": 1312}, {"type": "R", "before": "^{0", "after": "^0--convex", "start_char_pos": 1328, "end_char_pos": 1331}, {"type": "R", "before": "semicontinuous-", "after": "semicontinuous", "start_char_pos": 1448, 
"end_char_pos": 1463}, {"type": "R", "before": "^{0", "after": "^0--valued", "start_char_pos": 1472, "end_char_pos": 1475}, {"type": "R", "before": "\\epsilon-subdifferentials", "after": "\\varepsilon--subdifferentials", "start_char_pos": 1499, "end_char_pos": 1524}, {"type": "R", "before": "\\epsilon", "after": "\\varepsilon", "start_char_pos": 1665, "end_char_pos": 1673}, {"type": "R", "before": "-topology", "after": "--topology", "start_char_pos": 1685, "end_char_pos": 1694}, {"type": "R", "before": "^{0", "after": "^0--convex", "start_char_pos": 1713, "end_char_pos": 1716}], "sents_char_pos": [0, 293, 371, 722, 790, 1036, 1179, 1374, 1565]} {"doc_id": "1604.06131", "revision_depth": "1", "before_revision": "Several URLanisms, such as bacteria, algae, or spermatozoa, use flagellum or cilium activity to swim in a fluid . Many URLanisms use rather ample shape deformation, described as amoeboid, to propel themselves , either crawling on a substrate or swimming. Many eukaryotic cells were believed to require an underlying substratum to migrate (crawl) by using ample membrane deformation (like blebbing ). There is now an increasing evidence that a large variety of cells (including those of the immune system) can migrate without the assistance of focal adhesion, and can perform swimming as efficiently as crawling . This paper deals with a detailed analysis of amoeboid swimming in a confined fluid , by modeling the swimmer as an inextensible membrane deploying local active forces. The swimmer exhibits a rich behavior: it can settle into a straight trajectory in the channel , or can navigate from one wall to the other , depending on confinement. Furthermore, the nature of the swimmer is found to be affected by the confinement: the swimmer can behave as a pusher at low confinement, and becomes a puller at higher confinement , or vice versa; this shows that the swimmernature is not an intrinsic property. 
The scaling of the swimmer velocity V with the force amplitude A is analyzed in details and this shows that at small enough A, V\\sim A^2/\\eta^2, whereas at large enough A, V is independent of the force and is only fixed by the stroke cycle frequency and the swimmer size. This finding markedly contrasts with results known for swimming models referring to motion based on cilium and flagellum activity where V\\sim A/\\eta. Two types of efficiencies, put forward in the literature , are analyzed and it is found that the outcomes from each definition are quite distinct . We find that one type of efficiency has an optimum at a given confinement while the other has none . Future perspectives are outlined.", "after_revision": "Several URLanisms, such as bacteria, algae, or spermatozoa, use flagella or cilia to swim in a fluid , while many other URLanisms instead use ample shape deformation, described as amoeboid, to propel themselves by either crawling on a substrate or swimming. Many eukaryotic cells were believed to require an underlying substratum to migrate (crawl) by using membrane deformation (like blebbing or generation of lamellipodia) but there is now increasing evidence that a large variety of cells (including those of the immune system) can migrate without the assistance of focal adhesion, allowing them to swim as efficiently as they can crawl . This paper details the analysis of amoeboid swimming in a confined fluid by modeling the swimmer as an inextensible membrane deploying local active forces. The swimmer displays a rich behavior: it may settle into a straight trajectory in the channel or navigate from one wall to the other depending on its confinement. The nature of the swimmer is also found to be affected by confinement: the swimmer can behave , on the average over one swimming cycle, as a pusher at low confinement, and becomes a puller at higher confinement . The swimmer's nature is thus not an intrinsic property. 
The scaling of the swimmer velocity V with the force amplitude A is analyzed in detail showing that at small enough A, V\\sim A^2/\\eta^2, whereas at large enough A, V is independent of the force and is determined solely by the stroke frequency and swimmer size. This finding starkly contrasts with currently known results found from swimming models where motion is based on flagellar or ciliary activity, where V\\sim A/\\eta. To conclude, two definitions of efficiency as put forward in the literature are analyzed with distinct outcomes . We find that one type of efficiency has an optimum as a function of confinement while the other does not . Future perspectives are outlined.", "edit_actions": [{"type": "R", "before": "flagellum or cilium activity", "after": "flagella or cilia", "start_char_pos": 64, "end_char_pos": 92}, {"type": "R", "before": ". Many URLanisms use rather", "after": ", while many other URLanisms instead use", "start_char_pos": 112, "end_char_pos": 139}, {"type": "R", "before": ",", "after": "by", "start_char_pos": 209, "end_char_pos": 210}, {"type": "D", "before": "ample", "after": null, "start_char_pos": 355, "end_char_pos": 360}, {"type": "R", "before": "). 
There is now an", "after": "or generation of lamellipodia) but there is now", "start_char_pos": 397, "end_char_pos": 415}, {"type": "R", "before": "and can perform swimming", "after": "allowing them to swim", "start_char_pos": 559, "end_char_pos": 583}, {"type": "R", "before": "crawling", "after": "they can crawl", "start_char_pos": 602, "end_char_pos": 610}, {"type": "R", "before": "deals with a detailed", "after": "details the", "start_char_pos": 624, "end_char_pos": 645}, {"type": "D", "before": ",", "after": null, "start_char_pos": 696, "end_char_pos": 697}, {"type": "R", "before": "exhibits", "after": "displays", "start_char_pos": 793, "end_char_pos": 801}, {"type": "R", "before": "can", "after": "may", "start_char_pos": 822, "end_char_pos": 825}, {"type": "R", "before": ", or can", "after": "or", "start_char_pos": 875, "end_char_pos": 883}, {"type": "R", "before": ", depending on confinement. Furthermore, the", "after": "depending on its confinement. The", "start_char_pos": 920, "end_char_pos": 964}, {"type": "A", "before": null, "after": "also", "start_char_pos": 990, "end_char_pos": 990}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1015, "end_char_pos": 1018}, {"type": "A", "before": null, "after": ", on the average over one swimming cycle,", "start_char_pos": 1055, "end_char_pos": 1055}, {"type": "R", "before": ", or vice versa; this shows that the swimmernature is", "after": ". 
The swimmer's nature is thus", "start_char_pos": 1131, "end_char_pos": 1184}, {"type": "R", "before": "details and this shows", "after": "detail showing", "start_char_pos": 1292, "end_char_pos": 1314}, {"type": "R", "before": "only fixed", "after": "determined solely", "start_char_pos": 1421, "end_char_pos": 1431}, {"type": "R", "before": "cycle frequency and the", "after": "frequency and", "start_char_pos": 1446, "end_char_pos": 1469}, {"type": "R", "before": "markedly contrasts with results known for swimming models referring to motion based on cilium and flagellum activity", "after": "starkly contrasts with currently known results found from swimming models where motion is based on flagellar or ciliary activity,", "start_char_pos": 1497, "end_char_pos": 1613}, {"type": "R", "before": "Two types of efficiencies,", "after": "To conclude, two definitions of efficiency as", "start_char_pos": 1634, "end_char_pos": 1660}, {"type": "R", "before": ", are analyzed and it is found that the outcomes from each definition are quite distinct", "after": "are analyzed with distinct outcomes", "start_char_pos": 1691, "end_char_pos": 1779}, {"type": "R", "before": "at a given", "after": "as a function of", "start_char_pos": 1833, "end_char_pos": 1843}, {"type": "R", "before": "has none", "after": "does not", "start_char_pos": 1872, "end_char_pos": 1880}], "sents_char_pos": [0, 113, 254, 399, 612, 780, 947, 1147, 1211, 1483, 1633, 1781, 1882]} {"doc_id": "1605.02186", "revision_depth": "1", "before_revision": "Eukaryotic cells can detect the direction of a chemoattractant gradient precisely by generating intracellular gradients of signaling molecules that mirror the extracellular gradient ? an ability called directional sensing. 
Quantitative experiments have revealed two characteristic input-output relations of the system: First, in spatially-graded stimuli, the internal gradients of the signaling molecules at steady state reflect the relative steepness -- rather than the absolute concentration -- of the chemoattractant along the cell body. Second, upon a spatially homogeneous temporal increase in the chemoattractant concentration, the signaling molecule is transiently activated such that the response magnitude is determined by the ratio of the change in the input stimuli before and after the increase . However, the underlying mechanisms that endow the system with these response properties remain elusive. Here, by adopting a widely used modeling framework of directional sensing, i.e., local excitation and global inhibition (LEGI), we propose the hypothesis that the two scaling behaviors stem from a single design principle, namely, invariance of the governing equations to a scale transformation of the input level. Analyses of the LEGI-based model identify two different types of scale invariance , each of which is responsible for the two response properties. Our hypothesis leads to an experimentally testable prediction that a system with both types of invariance detects the relative steepness even in spatio-temporal gradient stimuli such as waves . Furthermore, we show that such behavior is independent of specific network topologies as long as the scale invariance holds, demonstrating the generic relationship between the scale invariance and the response properties .", "after_revision": "Eukaryotic cells respond to a chemoattractant gradient by forming intracellular gradients of signaling molecules that reflect the extracellular chemical gradient - an ability called directional sensing. 
Quantitative experiments have revealed two characteristic input-output relations of the system: First, in a static chemoattractant gradient, the shapes of the intracellular gradients of the signaling molecules are determined by the relative steepness , rather than the absolute concentration , of the chemoattractant gradient along the cell body. Second, upon a spatially homogeneous temporal increase in the input stimulus, the intracellular signaling molecules are transiently activated such that the response magnitudes are dependent on fold changes of the stimulus, not on absolute levels . However, the underlying mechanism that endows the system with these response properties remains elusive. Here, by adopting a widely used modeling framework of directional sensing, local excitation and global inhibition (LEGI), we propose a hypothesis that the two rescaling behaviors stem from a single design principle, namely, invariance of the governing equations to a scale transformation of the input level. Analyses of the LEGI-based model reveal that the invariance can be divided into two parts , each of which is responsible for the respective response properties. Our hypothesis leads to an experimentally testable prediction that a system with the invariance detects relative steepness even in dynamic gradient stimuli as well as in static gradients . 
Furthermore, we show that the relation between the response properties and the scale invariance is general in that it can be implemented by models with different network topologies .", "edit_actions": [{"type": "R", "before": "can detect the direction of", "after": "respond to", "start_char_pos": 17, "end_char_pos": 44}, {"type": "R", "before": "precisely by generating", "after": "by forming", "start_char_pos": 72, "end_char_pos": 95}, {"type": "R", "before": "mirror the extracellular gradient ?", "after": "reflect the extracellular chemical gradient -", "start_char_pos": 148, "end_char_pos": 183}, {"type": "R", "before": "spatially-graded stimuli, the internal", "after": "a static chemoattractant gradient, the shapes of the intracellular", "start_char_pos": 329, "end_char_pos": 367}, {"type": "R", "before": "at steady state reflect", "after": "are determined by", "start_char_pos": 405, "end_char_pos": 428}, {"type": "R", "before": "--", "after": ",", "start_char_pos": 452, "end_char_pos": 454}, {"type": "R", "before": "--", "after": ",", "start_char_pos": 494, "end_char_pos": 496}, {"type": "A", "before": null, "after": "gradient", "start_char_pos": 520, "end_char_pos": 520}, {"type": "R", "before": "chemoattractant concentration, the signaling molecule is", "after": "input stimulus, the intracellular signaling molecules are", "start_char_pos": 604, "end_char_pos": 660}, {"type": "R", "before": "magnitude is determined by the ratio of the change in the input stimuli before and after the increase", "after": "magnitudes are dependent on fold changes of the stimulus, not on absolute levels", "start_char_pos": 706, "end_char_pos": 807}, {"type": "R", "before": "mechanisms that endow", "after": "mechanism that endows", "start_char_pos": 834, "end_char_pos": 855}, {"type": "R", "before": "remain", "after": "remains", "start_char_pos": 898, "end_char_pos": 904}, {"type": "D", "before": "i.e.,", "after": null, "start_char_pos": 989, "end_char_pos": 994}, {"type": "R", 
"before": "the", "after": "a", "start_char_pos": 1053, "end_char_pos": 1056}, {"type": "R", "before": "scaling", "after": "rescaling", "start_char_pos": 1081, "end_char_pos": 1088}, {"type": "R", "before": "identify two different types of scale invariance", "after": "reveal that the invariance can be divided into two parts", "start_char_pos": 1261, "end_char_pos": 1309}, {"type": "R", "before": "two", "after": "respective", "start_char_pos": 1349, "end_char_pos": 1352}, {"type": "R", "before": "both types of invariance detects the", "after": "the invariance detects", "start_char_pos": 1455, "end_char_pos": 1491}, {"type": "R", "before": "spatio-temporal gradient stimuli such as waves", "after": "dynamic gradient stimuli as well as in static gradients", "start_char_pos": 1519, "end_char_pos": 1565}, {"type": "R", "before": "such behavior is independent of specific network topologies as long as the scale invariance holds, demonstrating the generic relationship between the scale invariance and the response properties", "after": "the relation between the response properties and the scale invariance is general in that it can be implemented by models with different network topologies", "start_char_pos": 1594, "end_char_pos": 1788}], "sents_char_pos": [0, 222, 541, 809, 913, 1227, 1373, 1567]} {"doc_id": "1607.02687", "revision_depth": "1", "before_revision": "It is necessary to thoroughly evaluate the safety of Automated Vehicles (AVs) before their release and deployment. Current evaluation approach mainly relies on i) testing AVs on public roads or ii) track testing with scenarios defined in a test matrix. These two methods have completely opposite drawbacks: the former takes too much time to execute but is realistic ; the latter can be finished in a short time but has no clear correlation to the safety benefits in the real world. To avoid the aforementioned problems, we propose the Accelerated Evaluationapproach focusing on the car-following scenario. 
The stochastic human-controlled vehicle (HV) motions were modeled based on 1.3 million miles of naturalistic driving data collected by the University of Michigan Safety Pilot Model Deployment Program. The statistics of the HV behaviors were modified to generate more intense interactions between HVs and AVs to accelerate the evaluation procedure. The Importance Sampling theory was used to ensure that the safety benefits of AVs are accurately assessed under accelerated tests. Crash, injury and conflict rates for a simulated AV are simulated to demonstrate the proposed approach. Results show that the test duration is reduced by a factor of 300 to 100,000 compared with the non-accelerated (naturalistic) evaluation. In other words, the proposed techniques have great potential to accelerate the AV evaluation process.", "after_revision": "The safety of Automated Vehicles (AVs) must be assured before their release and deployment. The current approach to evaluation relies primarily on ( i) testing AVs on public roads or ( ii) track testing with scenarios defined in a test matrix. These two methods have completely opposing drawbacks: the former , while offering realistic scenarios, takes too much time to execute ; the latter , though it can be completed in a short amount of time, has no clear correlation to safety benefits in the real world. To avoid the aforementioned problems, we propose Accelerated Evaluation, focusing on the car-following scenario. The stochastic human-controlled vehicle (HV) motions are modeled based on 1.3 million miles of naturalistic driving data collected by the University of Michigan Safety Pilot Model Deployment Program. The statistics of the HV behaviors are then modified to generate more intense interactions between HVs and AVs to accelerate the evaluation procedure. The Importance Sampling theory was used to ensure that the safety benefits of AVs are accurately assessed under accelerated tests. 
Crash, injury and conflict rates for a simulated AV are simulated to demonstrate the proposed approach. Results show that test duration is reduced by a factor of 300 to 100,000 compared with the non-accelerated (naturalistic) evaluation. In other words, the proposed techniques have great potential for accelerating the AV evaluation process.", "edit_actions": [{"type": "R", "before": "It is necessary to thoroughly evaluate the", "after": "The", "start_char_pos": 0, "end_char_pos": 42}, {"type": "A", "before": null, "after": "must be assured", "start_char_pos": 78, "end_char_pos": 78}, {"type": "R", "before": "Current evaluation approach mainly relies on", "after": "The current approach to evaluation relies primarily on (", "start_char_pos": 116, "end_char_pos": 160}, {"type": "A", "before": null, "after": "(", "start_char_pos": 195, "end_char_pos": 195}, {"type": "R", "before": "opposite", "after": "opposing", "start_char_pos": 289, "end_char_pos": 297}, {"type": "A", "before": null, "after": ", while offering realistic scenarios,", "start_char_pos": 320, "end_char_pos": 320}, {"type": "D", "before": "but is realistic", "after": null, "start_char_pos": 352, "end_char_pos": 368}, {"type": "R", "before": "can be finished", "after": ", though it can be completed", "start_char_pos": 382, "end_char_pos": 397}, {"type": "R", "before": "time but", "after": "amount of time,", "start_char_pos": 409, "end_char_pos": 417}, {"type": "D", "before": "the", "after": null, "start_char_pos": 446, "end_char_pos": 449}, {"type": "R", "before": "the Accelerated Evaluationapproach", "after": "Accelerated Evaluation,", "start_char_pos": 534, "end_char_pos": 568}, {"type": "R", "before": "were", "after": "are", "start_char_pos": 662, "end_char_pos": 666}, {"type": "R", "before": "were", "after": "are then", "start_char_pos": 845, "end_char_pos": 849}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1210, "end_char_pos": 1213}, {"type": "R", "before": "to accelerate", 
"after": "for accelerating", "start_char_pos": 1391, "end_char_pos": 1404}], "sents_char_pos": [0, 115, 254, 370, 484, 608, 809, 956, 1087, 1191, 1329]} {"doc_id": "1608.02191", "revision_depth": "1", "before_revision": "The increased deployment of wireless networks for battery-limited industrial applications in recent years highlights the need for tractable performance analysis and efficient QoS-aware transmit power management schemes. Modern industrial solutions deploy multi-hop topologies in order to bridge larger distances without necessarily shortening nodes' battery lifetime. This poses a significant challenge, as multi-hop analysis for heterogeneous wireless networks does not exist prior to our work. We overcome this challenge by extending a newly developed methodology based on (min,x) network calculus and provide a closed-form expression for the end-to-end delay violation probability over a cascade of heterogeneous buffered wireless fading channels. We further design model-based algorithms for power-minimization and network lifetime maximization which compute the optimal transmit power per node, along a QoS-constrained path. Our numerical study shows an overall transmit power savings of up to 95\\% when compared to a fixed power allocation . We also apply our algorithm to a realistic WirelessHART network setup and observe that link heterogeneity can significantly influence network lifetime when no efficient power management is applied. This work is especially useful for battery-powered wireless sensor nodes in QoS-constrained applications and offers a solid framework for network design and performance analysis of heterogeneous multi-hop wireless industrial networks.", "after_revision": "The noticeably increased deployment of wireless networks for battery-limited industrial applications in recent years highlights the need for tractable performance analysis methodologies as well as efficient QoS-aware transmit power management schemes. 
In this work, we seek to combine several important aspects of such networks, i.e., multi-hop connectivity, channel heterogeneity and the queuing effect, in order to address these needs. We design delay-bound-based algorithms for transmit power minimization and network lifetime maximization of multi-hop heterogeneous wireless networks using our previously developed stochastic network calculus approach for performance analysis of a cascade of buffered wireless fading channels. Our analysis shows an overall transmit power saving of up to 95\\% compared to a fixed power allocation scheme when using a service model in terms of the Shannon capacity limit. For a more realistic set-up, we evaluate the performance of the suggested algorithm in a WirelessHART network, which is a widely used communication standard for process automation and other industrial applications. We find that link heterogeneity can significantly reduce network lifetime when no efficient power management is applied. Moreover, we show, using extensive simulation study, that the proposed bound-based power allocation performs reasonably well compared to the real optimum, especially in the case of WirelessHART networks.", "edit_actions": [{"type": "A", "before": null, "after": "noticeably", "start_char_pos": 4, "end_char_pos": 4}, {"type": "R", "before": "and", "after": "methodologies as well as", "start_char_pos": 162, "end_char_pos": 165}, {"type": "R", "before": "Modern industrial solutions deploy", "after": "In this work, we seek to combine several important aspects of such networks, i.e.,", "start_char_pos": 221, "end_char_pos": 255}, {"type": "R", "before": "topologies", "after": "connectivity, channel heterogeneity and the queuing effect,", "start_char_pos": 266, "end_char_pos": 276}, {"type": "R", "before": "bridge larger distances without necessarily shortening nodes' battery lifetime. This poses a significant challenge, as", "after": "address these needs. 
We design delay-bound-based algorithms for transmit power minimization and network lifetime maximization of", "start_char_pos": 289, "end_char_pos": 407}, {"type": "D", "before": "analysis for", "after": null, "start_char_pos": 418, "end_char_pos": 430}, {"type": "R", "before": "does not exist prior to our work. We overcome this challenge by extending a newly developed methodology based on (min,x) network calculus and provide a closed-form expression for the end-to-end delay violation probability over", "after": "using our previously developed stochastic network calculus approach for performance analysis of", "start_char_pos": 463, "end_char_pos": 689}, {"type": "D", "before": "heterogeneous", "after": null, "start_char_pos": 703, "end_char_pos": 716}, {"type": "R", "before": "We further design model-based algorithms for power-minimization and network lifetime maximization which compute the optimal transmit power per node, along a QoS-constrained path. Our numerical study", "after": "Our analysis", "start_char_pos": 752, "end_char_pos": 950}, {"type": "R", "before": "savings", "after": "saving", "start_char_pos": 983, "end_char_pos": 990}, {"type": "D", "before": "when", "after": null, "start_char_pos": 1005, "end_char_pos": 1009}, {"type": "R", "before": ". We also apply our algorithm to a realistic WirelessHART network setup and observe", "after": "scheme when using a service model in terms of the Shannon capacity limit. For a more realistic set-up, we evaluate the performance of the suggested algorithm in a WirelessHART network, which is a widely used communication standard for process automation and other industrial applications. 
We find", "start_char_pos": 1047, "end_char_pos": 1130}, {"type": "R", "before": "influence", "after": "reduce", "start_char_pos": 1173, "end_char_pos": 1182}, {"type": "R", "before": "This work is especially useful for battery-powered wireless sensor nodes in QoS-constrained applications and offers a solid framework for network design and performance analysis of heterogeneous multi-hop wireless industrial", "after": "Moreover, we show, using extensive simulation study, that the proposed bound-based power allocation performs reasonably well compared to the real optimum, especially in the case of WirelessHART", "start_char_pos": 1247, "end_char_pos": 1471}], "sents_char_pos": [0, 220, 368, 496, 751, 930, 1048, 1246]} {"doc_id": "1610.00766", "revision_depth": "1", "before_revision": "When URLanelles are degraded by autophagy , typically some, but not all, of each URLanelle type are degraded. Autophagy selectivity must not only select the correct type URLanelle, but must discriminate between URLanelles of the same kind . In the context of peroxisomes, we use computational models to explore the hypothesis that physical clustering of autophagy receptor proteins on the surface of URLanelle provides an appropriate all-or-none signal for degradation. The pexophagy receptor proteins NBR1 and p62 are well characterized, though only NBR1 is essential for pexophagy (Deosaran%DIFDELCMD < {\\em %%% et al. , 2013). Extending earlier work by addressing the initial nucleation of NBR1 clusters on individual peroxisomes, we find that larger peroxisomes nucleate NBR1 clusters first and lose them due to competitive coarseninglast, resulting in significant size-selectivity favouring large peroxisomes. This effect can explain the increased catalase signal that results from experimental siRNA inhibition of p62. We also consider receptor cluster formation on individual peroxisomes with enhanced levels of ubiquitinand find that size-selectivity, but not cluster formation , is suppressed. 
Thus, our selectivity mechanism does not hinder selection of individual URLanelles via enhanced ubiquitination. NBR1 cluster formation provides a viable physical mechanism for all-or-none substrate selectivity in pexophagy .", "after_revision": "Selective autophagy must not only select the correct type URLanelle, but also must discriminate between URLanelles of the same kind so that some but not all of URLanelles are removed. We propose that physical clustering of autophagy receptor proteins on URLanelle surface can provide an appropriate all-or-none signal URLanelle degradation. We explore this proposal using a computational model restricted to peroxisomes and the relatively well characterized pexophagy receptor proteins NBR1 and p62 %DIFDELCMD < {\\em %%% . We find that larger peroxisomes nucleate NBR1 clusters first and lose them last through competitive coarsening. This results in significant size-selectivity that favors large peroxisomes, and can explain the increased catalase signal that results from siRNA inhibition of p62. Excess ubiquitin, resulting from URLanelles, suppresses size-selectivity but not cluster formation . Our proposed selectivity mechanism thus allows all URLanelles to be degraded, while otherwise selecting only a portion URLanelles for degradation .", "edit_actions": [{"type": "R", "before": "When URLanelles are degraded by autophagy , typically some, but not all, of each URLanelle type are degraded. Autophagy selectivity", "after": "Selective autophagy", "start_char_pos": 0, "end_char_pos": 131}, {"type": "A", "before": null, "after": "also", "start_char_pos": 185, "end_char_pos": 185}, {"type": "R", "before": ". In the context of peroxisomes, we use computational models to explore the hypothesis", "after": "so that some but not all of URLanelles are removed. 
We propose", "start_char_pos": 240, "end_char_pos": 326}, {"type": "R", "before": "the surface of URLanelle provides", "after": "URLanelle surface can provide", "start_char_pos": 386, "end_char_pos": 419}, {"type": "R", "before": "for degradation. The", "after": "URLanelle degradation. We explore this proposal using a computational model restricted to peroxisomes and the relatively well characterized", "start_char_pos": 454, "end_char_pos": 474}, {"type": "D", "before": "are well characterized, though only NBR1 is essential for pexophagy (Deosaran", "after": null, "start_char_pos": 516, "end_char_pos": 593}, {"type": "D", "before": "et al.", "after": null, "start_char_pos": 615, "end_char_pos": 621}, {"type": "R", "before": ", 2013). Extending earlier work by addressing the initial nucleation of NBR1 clusters on individual peroxisomes, we", "after": ". We", "start_char_pos": 622, "end_char_pos": 737}, {"type": "R", "before": "due to competitive coarseninglast, resulting", "after": "last through competitive coarsening. This results", "start_char_pos": 810, "end_char_pos": 854}, {"type": "R", "before": "favouring large peroxisomes. This effect", "after": "that favors large peroxisomes, and", "start_char_pos": 887, "end_char_pos": 927}, {"type": "D", "before": "experimental", "after": null, "start_char_pos": 988, "end_char_pos": 1000}, {"type": "R", "before": "We also consider receptor cluster formation on individual peroxisomes with enhanced levels of ubiquitinand find that size-selectivity,", "after": "Excess ubiquitin, resulting from URLanelles, suppresses size-selectivity", "start_char_pos": 1026, "end_char_pos": 1160}, {"type": "R", "before": ", is suppressed. Thus, our selectivity mechanism does not hinder selection of individual URLanelles via enhanced ubiquitination. NBR1 cluster formation provides a viable physical mechanism for all-or-none substrate selectivity in pexophagy", "after": ". 
Our proposed selectivity mechanism thus allows all URLanelles to be degraded, while otherwise selecting only a portion URLanelles for degradation", "start_char_pos": 1187, "end_char_pos": 1426}], "sents_char_pos": [0, 109, 241, 470, 630, 915, 1025, 1203, 1315]} {"doc_id": "1610.04835", "revision_depth": "1", "before_revision": "A microtubule (MT) is a tubular stiff filament formed by a URLanization of tubulin proteins. We develop a stochastic kinetic model for studying the strength and stability of a pre-formed attachment of a MT with a rigid wall where the MT is tethered to the wall by a group of motor proteins. Such an attachment , formed by the specific interactions between the MT and the motors, is an analog of ligand-receptor bonds , the MT and the motors anchored on the wall being the counterparts of the ligand and receptors, respectively. However, unlike other ligands, the length of a MT can change with time because of its polymerization-depolymerization kinetics . The simple model developed here is motivated by the MTs linked to the cell cortex by dynein motors . We present the theory for both force-ramp and force-clamp conditions. In the force-ramp protocol we investigate the strength of the attachment by assuming imposition of a time-dependent external load tension that increases linearly with time till the attachment gets ruptured, we calculate the distribution of the rupture forces that fluctuates from one loading to another. In the force-clamp protocol, to test the stability, we compute the distribution of the lifetimes of the attachments under externally applied time-independent load tension, the results establish the MT-wall attachment to be an analog of a slip-bond .", "after_revision": " We develop a stochastic kinetic model of a pre-formed attachment of a mictrotuble (MT) with a cell cortex, in which the MT is tethered to the cell by a group of active motor proteins. 
Such an attachment is a particularly unique case of ligand-receptor bonds : The MT ligand changes its length (and thus binding sites) with time by polymerization-depolymerization kinetics , while multiple motor receptors tend to walk actively along the MT length. These processes, combined with force-mediated unbinding of the motors, result in an elaborate behavior of the MT connection to the cell cortex . We present results for the strength and lifetime of the system through the well-established force-clamp and force-ramp protocols when external tension is applied to the MT. The simulation results reveal that the MT-cell attachment behaves as a catch-bond or slip-bond depending on system parameters. We provide analytical approximations of the lifetime and discuss implications of our results on in-vitro experiments .", "edit_actions": [{"type": "D", "before": "A microtubule (MT) is a tubular stiff filament formed by a URLanization of tubulin proteins.", "after": null, "start_char_pos": 0, "end_char_pos": 92}, {"type": "D", "before": "for studying the strength and stability", "after": null, "start_char_pos": 131, "end_char_pos": 170}, {"type": "R", "before": "MT with a rigid wall where", "after": "mictrotuble (MT) with a cell cortex, in which", "start_char_pos": 203, "end_char_pos": 229}, {"type": "R", "before": "wall", "after": "cell", "start_char_pos": 256, "end_char_pos": 260}, {"type": "A", "before": null, "after": "active", "start_char_pos": 275, "end_char_pos": 275}, {"type": "R", "before": ", formed by the specific interactions between the MT and the motors, is an analog", "after": "is a particularly unique case", "start_char_pos": 311, "end_char_pos": 392}, {"type": "R", "before": ", the MT and the motors anchored on the wall being the counterparts of the ligand and receptors, respectively. 
However, unlike other ligands, the length of a MT can change with time because of its", "after": ": The MT ligand changes its length (and thus binding sites) with time by", "start_char_pos": 418, "end_char_pos": 614}, {"type": "R", "before": ". The simple model developed here is motivated by the MTs linked", "after": ", while multiple motor receptors tend to walk actively along the MT length. These processes, combined with force-mediated unbinding of the motors, result in an elaborate behavior of the MT connection", "start_char_pos": 656, "end_char_pos": 720}, {"type": "D", "before": "by dynein motors", "after": null, "start_char_pos": 740, "end_char_pos": 756}, {"type": "R", "before": "the theory for both force-ramp and", "after": "results for the strength and lifetime of the system through the well-established", "start_char_pos": 770, "end_char_pos": 804}, {"type": "R", "before": "conditions. In the", "after": "and", "start_char_pos": 817, "end_char_pos": 835}, {"type": "R", "before": "protocol we investigate the strength of", "after": "protocols when external tension is applied to the MT. The simulation results reveal that", "start_char_pos": 847, "end_char_pos": 886}, {"type": "R", "before": "attachment by assuming imposition of a time-dependent external load tension that increases linearly with time till the attachment gets ruptured, we calculate the distribution of the rupture forces that fluctuates from one loading to another. In the force-clamp protocol, to test the stability, we compute the distribution of the lifetimes of the attachments under externally applied time-independent load tension, the results establish the MT-wall attachment to be an analog of a", "after": "MT-cell attachment behaves as a catch-bond or", "start_char_pos": 891, "end_char_pos": 1370}, {"type": "A", "before": null, "after": "depending on system parameters. 
We provide analytical approximations of the lifetime and discuss implications of our results on in-vitro experiments", "start_char_pos": 1381, "end_char_pos": 1381}], "sents_char_pos": [0, 92, 291, 528, 657, 758, 828, 1132]} {"doc_id": "1612.06186", "revision_depth": "1", "before_revision": " Using the recently available World Input-Output Database , we modeled the evolving world economic network ( from 1995 to 2011 ) with a series of time-homogeneous finite Markov chains. Next, we investigated different aspects of the world economic network via different properties of the Markov chains including mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. We showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that are comparable to GDP shares of economies as the traditional index of economies welfare . Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of number of influenced nodes to the total number of nodes, caused by a shock in the activity of that node and the latter is based on the number of times a specific economic node is affected by a shock in the activity of all the other nodes. Next, we showed that the sum of systemic fragility values of all the nodes as an aggregate measure of network fragility could be used as a predictive risk measure of the whole economic network. Finally, focusing on Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of economic slow down of some of economic nodes on the overall flow of the world economic network. 
While the economic slowdown of the majority of nodes with high structural power results to a slower average monetary flow , there are some nodes, where their slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money .", "after_revision": "The initial theoretical connections between Leontief input-output models and Markov chains were established back in 1950s. However, considering the wide variety of mathematical properties of Markov chains, there has not been a full investigation of evolving world economic networks with Markov chain formalism. Using the recently available world input-output database , we modeled the evolution of the world economic network from 1995 to 2011 through analysis of a series of finite Markov chains. We assessed different aspects of this evolving system via different properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that are comparable to GDP shares of economies as the traditional index of economies . Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the network. 
While the economic slowdown of the majority of nodes with high structural power results to a slower average monetary flow over the network , there are some nodes, where their slowdowns improve the overall quality of the network in terms of connectivity and the average monetary flow .", "edit_actions": [{"type": "A", "before": null, "after": "The initial theoretical connections between Leontief input-output models and Markov chains were established back in 1950s. However, considering the wide variety of mathematical properties of Markov chains, there has not been a full investigation of evolving world economic networks with Markov chain formalism.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "World Input-Output Database", "after": "world input-output database", "start_char_pos": 30, "end_char_pos": 57}, {"type": "R", "before": "evolving", "after": "evolution of the", "start_char_pos": 75, "end_char_pos": 83}, {"type": "D", "before": "(", "after": null, "start_char_pos": 107, "end_char_pos": 108}, {"type": "R", "before": ") with", "after": "through analysis of", "start_char_pos": 127, "end_char_pos": 133}, {"type": "D", "before": "time-homogeneous", "after": null, "start_char_pos": 146, "end_char_pos": 162}, {"type": "R", "before": "Next, we investigated", "after": "We assessed", "start_char_pos": 185, "end_char_pos": 206}, {"type": "R", "before": "the world economic network", "after": "this evolving system", "start_char_pos": 228, "end_char_pos": 254}, {"type": "R", "before": "including", "after": "such as", "start_char_pos": 301, "end_char_pos": 310}, {"type": "R", "before": "We", "after": "First, we", "start_char_pos": 422, "end_char_pos": 424}, {"type": "D", "before": "welfare", "after": null, "start_char_pos": 729, "end_char_pos": 736}, {"type": "R", "before": "that", "after": "a", "start_char_pos": 970, "end_char_pos": 974}, {"type": "R", "before": "all", "after": "any of", "start_char_pos": 1094, "end_char_pos": 1097}, {"type": "D", 
"before": "Next, we showed that the sum of systemic fragility values of all the nodes as an aggregate measure of network fragility could be used as a predictive risk measure of the whole economic network.", "after": null, "start_char_pos": 1115, "end_char_pos": 1308}, {"type": "R", "before": "economic slow down of some of", "after": "a change in activity levels of", "start_char_pos": 1453, "end_char_pos": 1482}, {"type": "D", "before": "world economic", "after": null, "start_char_pos": 1525, "end_char_pos": 1539}, {"type": "A", "before": null, "after": "over the network", "start_char_pos": 1671, "end_char_pos": 1671}, {"type": "R", "before": "flow of money", "after": "monetary flow", "start_char_pos": 1802, "end_char_pos": 1815}], "sents_char_pos": [0, 184, 421, 542, 738, 1114, 1308, 1548]} {"doc_id": "1612.09103", "revision_depth": "2", "before_revision": "Given two Polish spaces U and V, denote and countably generated sub \\sigma-field G\\subsetF. Denote } by L( U \\times%DIFDELCMD < } %%% V) and \\mathcal{L the set of all bounded upper semianalytic functions from U \\times%DIFDELCMD < } %%% V and U to the real line, respectively(G) the subset of \\mathcal{G}-upper semianalytic functions} . Let \\mathcal{E}(\\cdot| U )\\colon\\mathcal{L}( U \\times%DIFDELCMD < }%%% V )\\to\\mathcal{L}( U ) be a sublinear increasing functional which leaves \\mathcal{L}( U ) invariant. We prove that there exists a set-valued mapping \\mathcal{P} _V from U} from \\Omega } to the set of probabilities on V with compact convex values and analytic graph such that \\mathcal{E}(X| U)(u)(\\omega} )= \\sup_{P\\in\\mathcal{P} _V ( u )} \\int_V X(u,v)\\,P(dv) if and only if \\mathcal{E}(\\cdot | U ) is pointwise continuous from below and continuous from above on the continuous functions. 
Further, given another sublinear increasing functional \\mathcal{E}(\\cdot)\\colon\\mathcal{L}( U \\times%DIFDELCMD < } %%% V )\\to\\mathbb{R} which leaves the constants invariant, the tower property \\mathcal{E}(\\cdot)=\\mathcal{E}(\\mathcal{E}(\\cdot| U )) is characterized via a pasting property of the representing sets of probabilities . As applications, we characterize under which conditions the product of a set of probabilities and a set of kernels is compact , and under which conditions a nonlinear version of Fubini's theorem holds true .", "after_revision": "Let \\Omega be a Polish space with Borel \\sigma-field \\mathcal{F and countably generated sub \\sigma-field G\\subsetF. Denote } by L( %DIFDELCMD < } %%% \\mathcal{F the set of all bounded \\mathcal{F semianalytic functions from %DIFDELCMD < } %%% \\Omega to the reals and by \\mathcal{L(G) the subset of \\mathcal{G}-upper semianalytic functions} . Let \\mathcal{E}(\\cdot| \\mathcal{G )\\colon\\mathcal{L}( %DIFDELCMD < }%%% \\mathcal{F )\\to\\mathcal{L}( \\mathcal{G ) be a sublinear increasing functional which leaves \\mathcal{L}( \\mathcal{G ) invariant. It is shown that there exists a \\mathcal{G set-valued mapping \\mathcal{P} _{\\mathcal{G} from \\Omega } to the set of probabilities which are concentrated on atoms of \\mathcal{G with compact convex values such that \\mathcal{E}(X| \\mathcal{G)(\\omega} )= \\sup_{P\\in\\mathcal{P} _{\\mathcal{G ( \\omega )} E_P[X] if and only if \\mathcal{E}(\\cdot | \\mathcal{G ) is pointwise continuous from below and continuous from above on the continuous functions. 
Further, given another sublinear increasing functional \\mathcal{E}(\\cdot)\\colon\\mathcal{L}( %DIFDELCMD < } %%% \\mathcal{F )\\to\\mathbb{R} which leaves the constants invariant, the tower property \\mathcal{E}(\\cdot)=\\mathcal{E}(\\mathcal{E}(\\cdot| \\mathcal{G )) is characterized via a pasting property of the representing sets of probabilities , and the importance of analytic functions is explained. Finally, it is characterized when a nonlinear version of Fubini's theorem holds true and when the product of a set of probabilities and a set of kernels is compact .", "edit_actions": [{"type": "R", "before": "Given two Polish spaces U and V, denote", "after": "Let \\Omega be a Polish space with Borel \\sigma-field \\mathcal{F", "start_char_pos": 0, "end_char_pos": 39}, {"type": "D", "before": "U", "after": null, "start_char_pos": 107, "end_char_pos": 108}, {"type": "D", "before": "\\times", "after": null, "start_char_pos": 109, "end_char_pos": 115}, {"type": "R", "before": "V) and \\mathcal{L", "after": "\\mathcal{F", "start_char_pos": 134, "end_char_pos": 151}, {"type": "R", "before": "upper", "after": "\\mathcal{F", "start_char_pos": 175, "end_char_pos": 180}, {"type": "D", "before": "U", "after": null, "start_char_pos": 209, "end_char_pos": 210}, {"type": "D", "before": "\\times", "after": null, "start_char_pos": 211, "end_char_pos": 217}, {"type": "R", "before": "V and U to the real line, respectively", "after": "\\Omega to the reals and by \\mathcal{L", "start_char_pos": 236, "end_char_pos": 274}, {"type": "R", "before": "U", "after": "\\mathcal{G", "start_char_pos": 359, "end_char_pos": 360}, {"type": "D", "before": "U", "after": null, "start_char_pos": 381, "end_char_pos": 382}, {"type": "D", "before": "\\times", "after": null, "start_char_pos": 383, "end_char_pos": 389}, {"type": "R", "before": "V", "after": "\\mathcal{F", "start_char_pos": 407, "end_char_pos": 408}, {"type": "R", "before": "U", "after": "\\mathcal{G", "start_char_pos": 426, 
"end_char_pos": 427}, {"type": "R", "before": "U", "after": "\\mathcal{G", "start_char_pos": 493, "end_char_pos": 494}, {"type": "R", "before": "We prove", "after": "It is shown", "start_char_pos": 508, "end_char_pos": 516}, {"type": "A", "before": null, "after": "\\mathcal{G", "start_char_pos": 537, "end_char_pos": 537}, {"type": "R", "before": "_V from U", "after": "_{\\mathcal{G", "start_char_pos": 569, "end_char_pos": 578}, {"type": "R", "before": "on V", "after": "which are concentrated on atoms of \\mathcal{G", "start_char_pos": 622, "end_char_pos": 626}, {"type": "D", "before": "and analytic graph", "after": null, "start_char_pos": 654, "end_char_pos": 672}, {"type": "R", "before": "U)(u", "after": "\\mathcal{G", "start_char_pos": 698, "end_char_pos": 702}, {"type": "R", "before": "_V", "after": "_{\\mathcal{G", "start_char_pos": 737, "end_char_pos": 739}, {"type": "R", "before": "u", "after": "\\omega", "start_char_pos": 742, "end_char_pos": 743}, {"type": "R", "before": "\\int_V X(u,v)\\,P(dv)", "after": "E_P[X]", "start_char_pos": 747, "end_char_pos": 767}, {"type": "R", "before": "U", "after": "\\mathcal{G", "start_char_pos": 803, "end_char_pos": 804}, {"type": "D", "before": "U", "after": null, "start_char_pos": 989, "end_char_pos": 990}, {"type": "D", "before": "\\times", "after": null, "start_char_pos": 991, "end_char_pos": 997}, {"type": "R", "before": "V", "after": "\\mathcal{F", "start_char_pos": 1016, "end_char_pos": 1017}, {"type": "R", "before": "U", "after": "\\mathcal{G", "start_char_pos": 1140, "end_char_pos": 1141}, {"type": "R", "before": ". As applications, we characterize under which conditions", "after": ", and the importance of analytic functions is explained. 
Finally, it is characterized when a nonlinear version of Fubini's theorem holds true and when", "start_char_pos": 1227, "end_char_pos": 1284}, {"type": "D", "before": ", and under which conditions a nonlinear version of Fubini's theorem holds true", "after": null, "start_char_pos": 1355, "end_char_pos": 1434}], "sents_char_pos": [0, 91, 507, 896, 1228]} {"doc_id": "1701.07712", "revision_depth": "1", "before_revision": " In this work we use a combination of statistical physics and dynamical systems approaches , to analyze the response to an antigen of a simplified model of the adaptive immune system, which comprises B, T helper and T regulatory lymphocytes. Results show that the model is remarkably robust against changes in the kinetic parameters, noise levels, and mechanisms that activate T regulatory lymphocytes. In contrast, the model is extremely sensitive to changes in the ratio between T helper and T regulatory lymphocytes, exhibiting in particular a phase transition , from a responsive to an immuno-suppressed phase, when the ratio is lowered below a critical value . To the best of our knowledge, this is the first mathematical study to support the validity of the T-helper/ T-regulator lymphocyte ratio as an index of immunosuppression , with a potential for prognostic monitoring .", "after_revision": "Recent experimental studies have suggested the ratio between T-helper and T-suppressor lymphocytes as an index of immunosuppression in HIV, cancer, immunosenescence and inflammatory and auto-immune diseases. However, a quantitative understanding of the impact of this ratio on the immune response has lagged behind data and its validity as a tool for prognostic monitoring or therapeutic target remains an open question. In this work , we use statistical physics and dynamical systems approaches to analyze the time-dependent response to an antigen , of a simplified model of the adaptive immune system, which comprises B, T-helper and T-suppressor lymphocytes. 
The model is remarkably robust against changes in the noise level and kinetic parameters, but it is very sensitive to changes in the ratio between T-helper and T-suppressor lymphocytes, exhibiting , in particular, a transition from a responsive to an immuno-suppressed phase, as the ratio is lowered below a critical value , which is in line with experiments. This result supports the validity of the T-helper/ T-suppressor ratio as an index of immunosuppression and may provide a useful theoretical benchmark to interpret and compare experiments .", "edit_actions": [{"type": "A", "before": null, "after": "Recent experimental studies have suggested the ratio between T-helper and T-suppressor lymphocytes as an index of immunosuppression in HIV, cancer, immunosenescence and inflammatory and auto-immune diseases. However, a quantitative understanding of the impact of this ratio on the immune response has lagged behind data and its validity as a tool for prognostic monitoring or therapeutic target remains an open question.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "we use a combination of", "after": ", we use", "start_char_pos": 14, "end_char_pos": 37}, {"type": "D", "before": ",", "after": null, "start_char_pos": 91, "end_char_pos": 92}, {"type": "A", "before": null, "after": "time-dependent", "start_char_pos": 108, "end_char_pos": 108}, {"type": "A", "before": null, "after": ",", "start_char_pos": 132, "end_char_pos": 132}, {"type": "R", "before": "T helper and T regulatory lymphocytes. Results show that the", "after": "T-helper and T-suppressor lymphocytes. The", "start_char_pos": 205, "end_char_pos": 265}, {"type": "A", "before": null, "after": "noise level and", "start_char_pos": 316, "end_char_pos": 316}, {"type": "R", "before": "noise levels, and mechanisms that activate T regulatory lymphocytes. 
In contrast, the model is extremely", "after": "but it is very", "start_char_pos": 337, "end_char_pos": 441}, {"type": "R", "before": "T helper and T regulatory", "after": "T-helper and T-suppressor", "start_char_pos": 484, "end_char_pos": 509}, {"type": "R", "before": "in particular a phase transition ,", "after": ", in particular, a transition", "start_char_pos": 534, "end_char_pos": 568}, {"type": "R", "before": "when", "after": "as", "start_char_pos": 618, "end_char_pos": 622}, {"type": "R", "before": ". To the best of our knowledge, this is the first mathematical study to support the", "after": ", which is in line with experiments. This result supports the", "start_char_pos": 667, "end_char_pos": 750}, {"type": "R", "before": "T-regulator lymphocyte", "after": "T-suppressor", "start_char_pos": 777, "end_char_pos": 799}, {"type": "R", "before": ", with a potential for prognostic monitoring", "after": "and may provide a useful theoretical benchmark to interpret and compare experiments", "start_char_pos": 839, "end_char_pos": 883}], "sents_char_pos": [0, 243, 405, 668]} {"doc_id": "1702.01265", "revision_depth": "1", "before_revision": "Background: The correct positioning of the mitotic spindle during the asymmetric division of the nematode C. elegans zygote relies on the combination of centering and corticalpulling forces. These forces, revealed by centrosome anaphase oscillations, are regulated through the dynamics of force generators, related to mitosis progression. Recently, we reported the control of oscillation onset by the posterior spindle pole position in related speciesC. briggsae, necessitating a re-evaluation of the role of astral microtubules dynamics. Results: After exhibiting such a positional switch in C. elegans, we mapped the microtubule ends at the cortex and observed a correlation between the proximity of the centrosomes and the density of microtubule contacts. 
To explore the functional consequences, we extended the tug-of-war model and successfully accounted for the positional switch. We predicted and experimentally validated that the control of oscillation onset was robust to changes in cell geometry or maximum number of attached force generators. We also predicted that the final position of the posterior centrosome and thus the spindle has a reduced dependence upon the force generator dynamics or number . Conclusion: The outburst of forces responsible of spindle anaphase oscillations and positioning is regulated by the spindle position through the spatial modulation of microtubule contactsat the cortex . This regulation superimposes that of force generator processivity putatively linked to the cell cycle. This novel control provides robustness to variations in zygote geometry or detailed properties of cortical force generators .", "after_revision": "Background: During asymmetric division of the Caenorhabditis elegans nematode zygote, the polarity cues distribution and daughter cell fates depend on the correct positioning of the mitotic spindle which results from both centering and cortical pulling forces. Revealed by spindle rocking, these pulling forces are regulated by the force generator dynamics, which are related to mitosis progression. This may be combined with a second regulation, this one by the posterior spindle pole position , which can be seen when comparing related species. Results: After delaying anaphase onset, we identified a positional pulling force regulation in C. elegans, which we ascribed to microtubule dynamics at the cortex . Indeed, in mapping the contacts we found a correlation between the centrosome-cortex distance and the microtubule contact density. This density in turn modulates pulling force generator activity. 
We expanded our model of spindle rocking and predicted then experimentally validated that the oscillation onset position resists changes in cellular geometry and number of force generators. Consistent with final spindle position measurements, this new model accounts for a lower dependence on force generator dynamics and quantities than predicted by the previous model . Conclusion: The spindle position regulates the rapid increase in forces needed for anaphase oscillation and positioning through the spatial modulation of microtubule-cortex contacts . This regulation superimposes that of force generator processivity , putatively linked to the cell cycle. This novel control confers resistance to variations in zygote geometry and dynamics of cortical force generators . Interestingly, this robustness originates in cell mechanics rather than biochemical networks .", "edit_actions": [{"type": "R", "before": "The correct positioning of the mitotic spindle during the", "after": "During", "start_char_pos": 12, "end_char_pos": 69}, {"type": "R", "before": "nematode C. elegans zygote relies on the combination of centering and corticalpulling forces. These forces, revealed by centrosome anaphase oscillations, are regulated through the dynamics of force generators,", "after": "Caenorhabditis elegans nematode zygote, the polarity cues distribution and daughter cell fates depend on the correct positioning of the mitotic spindle which results from both centering and cortical pulling forces. Revealed by spindle rocking, these pulling forces are regulated by the force generator dynamics, which are", "start_char_pos": 97, "end_char_pos": 306}, {"type": "R", "before": "Recently, we reported the control of oscillation onset", "after": "This may be combined with a second regulation, this one", "start_char_pos": 339, "end_char_pos": 393}, {"type": "R", "before": "in related speciesC. 
briggsae, necessitating a re-evaluation of the role of astral microtubules dynamics.", "after": ", which can be seen when comparing related species.", "start_char_pos": 433, "end_char_pos": 538}, {"type": "R", "before": "exhibiting such a positional switch", "after": "delaying anaphase onset, we identified a positional pulling force regulation", "start_char_pos": 554, "end_char_pos": 589}, {"type": "R", "before": "we mapped the microtubule ends", "after": "which we ascribed to microtubule dynamics", "start_char_pos": 605, "end_char_pos": 635}, {"type": "R", "before": "and observed", "after": ". Indeed, in mapping the contacts we found", "start_char_pos": 650, "end_char_pos": 662}, {"type": "R", "before": "proximity of the centrosomes and the density of microtubule contacts. To explore the functional consequences, we extended the tug-of-war model and successfully accounted for the positional switch. We predicted and", "after": "centrosome-cortex distance and the microtubule contact density. This density in turn modulates pulling force generator activity. 
We expanded our model of spindle rocking and predicted then", "start_char_pos": 689, "end_char_pos": 902}, {"type": "R", "before": "control of oscillation onset was robust to changes in cell geometry or maximum number of attached", "after": "oscillation onset position resists changes in cellular geometry and number of", "start_char_pos": 937, "end_char_pos": 1034}, {"type": "R", "before": "We also predicted that the final position of the posterior centrosome and thus the spindle has a reduced dependence upon the", "after": "Consistent with final spindle position measurements, this new model accounts for a lower dependence on", "start_char_pos": 1053, "end_char_pos": 1177}, {"type": "R", "before": "or number", "after": "and quantities than predicted by the previous model", "start_char_pos": 1203, "end_char_pos": 1212}, {"type": "R", "before": "outburst of forces responsible of spindle anaphase oscillations and positioning is regulated by the spindle position", "after": "spindle position regulates the rapid increase in forces needed for anaphase oscillation and positioning", "start_char_pos": 1231, "end_char_pos": 1347}, {"type": "R", "before": "microtubule contactsat the cortex", "after": "microtubule-cortex contacts", "start_char_pos": 1382, "end_char_pos": 1415}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1484, "end_char_pos": 1484}, {"type": "R", "before": "provides robustness", "after": "confers resistance", "start_char_pos": 1541, "end_char_pos": 1560}, {"type": "R", "before": "or detailed properties", "after": "and dynamics", "start_char_pos": 1594, "end_char_pos": 1616}, {"type": "A", "before": null, "after": ". 
Interestingly, this robustness originates in cell mechanics rather than biochemical networks", "start_char_pos": 1646, "end_char_pos": 1646}], "sents_char_pos": [0, 190, 338, 538, 758, 885, 1052, 1417, 1521]} {"doc_id": "1702.03098", "revision_depth": "1", "before_revision": "Determining risk contributions by unit exposures to portfolio-wide economic capital is an important task in financial risk management. Despite its practical demands, computation of risk contributions is challenging for most risk models because it often requires rare-event simulation . In this paper , we address the problem of estimating risk contributions when the total risk is measured by Value-at-Risk (VaR). We propose a new estimator of VaR contributions , that utilizes Markov chain Monte Carlo (MCMC) method. Unlike the existing estimators, our MCMC-based estimator is computed by samples of conditional loss distribution given the rare event of our interest. MCMC method enables to generate such samples without evaluating the density of total risk. Thanks to these features, our estimator has improved sample-efficiency compared with the crude Monte Carlo method. Moreover, our method is widely applicable to various risk models specified by joint portfolio loss density. In this paper, we show that our MCMC-based estimator has several attractive properties, such as consistency and asymptotic normality. Our numerical experiment also demonstrates that , in various risk models used in practice, our MCMC estimator has smaller bias and MSE compared with these of existing estimators.", "after_revision": "Determining risk contributions of unit exposures to portfolio-wide economic capital is an important task in financial risk management. Computing risk contributions involves difficulties caused by rare-event simulations . In this study , we address the problem of estimating risk contributions when the total risk is measured by value-at-risk (VaR). 
Our proposed estimator of VaR contributions is based on the Metropolis-Hasting (MH) algorithm, which is one of the most prevalent Markov chain Monte Carlo (MCMC) methods. Unlike existing estimators, our MH-based estimator consists of samples from conditional loss distribution given a rare event of interest. This feature enhances sample efficiency compared with the crude Monte Carlo method. Moreover, our method has the consistency and asymptotic normality, and is widely applicable to various risk models having joint loss density. Our numerical experiments based on simulation and real-world data demonstrate that in various risk models , even those having high-dimensional (approximately 500) inhomogeneous margins, our MH estimator has smaller bias and mean squared error compared with existing estimators.", "edit_actions": [{"type": "R", "before": "by", "after": "of", "start_char_pos": 31, "end_char_pos": 33}, {"type": "R", "before": "Despite its practical demands, computation of risk contributions is challenging for most risk models because it often requires", "after": "Computing risk contributions involves difficulties caused by", "start_char_pos": 135, "end_char_pos": 261}, {"type": "R", "before": "simulation", "after": "simulations", "start_char_pos": 273, "end_char_pos": 283}, {"type": "R", "before": "paper", "after": "study", "start_char_pos": 294, "end_char_pos": 299}, {"type": "R", "before": "Value-at-Risk", "after": "value-at-risk", "start_char_pos": 393, "end_char_pos": 406}, {"type": "R", "before": "We propose a new", "after": "Our proposed", "start_char_pos": 414, "end_char_pos": 430}, {"type": "R", "before": ", that utilizes", "after": "is based on the Metropolis-Hasting (MH) algorithm, which is one of the most prevalent", "start_char_pos": 462, "end_char_pos": 477}, {"type": "R", "before": "method. Unlike the", "after": "methods. 
Unlike", "start_char_pos": 510, "end_char_pos": 528}, {"type": "R", "before": "MCMC-based estimator is computed by samples of", "after": "MH-based estimator consists of samples from", "start_char_pos": 554, "end_char_pos": 600}, {"type": "R", "before": "the", "after": "a", "start_char_pos": 637, "end_char_pos": 640}, {"type": "R", "before": "our interest. MCMC method enables to generate such samples without evaluating the density of total risk. Thanks to these features, our estimator has improved sample-efficiency", "after": "interest. This feature enhances sample efficiency", "start_char_pos": 655, "end_char_pos": 830}, {"type": "A", "before": null, "after": "has the consistency and asymptotic normality, and", "start_char_pos": 896, "end_char_pos": 896}, {"type": "R", "before": "specified by joint portfolio", "after": "having joint", "start_char_pos": 941, "end_char_pos": 969}, {"type": "R", "before": "In this paper, we show that our MCMC-based estimator has several attractive properties, such as consistency and asymptotic normality. Our numerical experiment also demonstrates that ,", "after": "Our numerical experiments based on simulation and real-world data demonstrate that", "start_char_pos": 984, "end_char_pos": 1167}, {"type": "R", "before": "used in practice, our MCMC", "after": ", even those having high-dimensional (approximately 500) inhomogeneous margins, our MH", "start_char_pos": 1191, "end_char_pos": 1217}, {"type": "R", "before": "MSE compared with these of", "after": "mean squared error compared with", "start_char_pos": 1249, "end_char_pos": 1275}], "sents_char_pos": [0, 134, 285, 413, 517, 668, 759, 874, 983, 1117]} {"doc_id": "1704.07580", "revision_depth": "1", "before_revision": "Motivated by map labeling, we study the problem in which ] we are given a collection of n disks D_1,%DIFDELCMD < \\dots%%% , D_n in the plane that grow at possibly different speeds. Whenever two disks meet , the one with the lower index disappears. 
This problem was introduced by Funke, Krumpe, and Storandt IWOCA 2016 . We provide the first general subquadratic algorithm for computing the times and the order of disappearance. This algorithm also works for other shapes ( such as rectangles)and in any fixed dimension. Using quadtrees, we provide an alternative algorithm that runs in near linear time , although this second algorithm has a logarithmic dependence on either\\big(\\big(\\big)\\big) the ratio of the fastest speed to the slowest speed of disks or the spread of disk centers (the ratio of the maximum to the minimum distance between them). Our result improves the running times of previous algorithms by Funke, Krumpe, and Storandt [IWOCA 2016], Bahrdt et al. [ALENEX 2017] and Funke and Storandt [ EWCG 2017]. Finally, we give an \\Omega(n\\log n) lower bound on the problem , showing that our quadtree algorithms are almost tight.", "after_revision": "Motivated by map labeling, Funke, Krumpe, and Storandt IWOCA 2016] introduced the following problem: we are given a sequence of n disks %DIFDELCMD < \\dots%%% in the plane . Initially, all disks have radius 0, and they grow at constant, but possibly different, speeds. Whenever two disks touch , the one with the higher index disappears. The goal is to determine the elimination order, i.e., the order in which the disks disappear . We provide the first general subquadratic algorithm for this problem. Our solution extends to other shapes ( e.g., rectangles), and it works in any fixed dimension. We also describe an alternative algorithm that is based on quadtrees. Its running time is O\\big(n\\big(\\log n + \\min \\{ \\log \\Delta, \\log \\Phi \\}\\big)\\big), where \\Delta is the ratio of the fastest and the slowest growth rate and \\Phi is the ratio of the largest and the smallest distance between two disk centers. This improves the running times of previous algorithms by Funke, Krumpe, and Storandt [IWOCA 2016], Bahrdt et al. 
[ALENEX 2017] , and Funke and Storandt [ EuroCG 2017]. Finally, we give an \\Omega(n\\log n) lower bound , showing that our quadtree algorithms are almost tight.", "edit_actions": [{"type": "R", "before": "we study the problem in which", "after": "Funke, Krumpe, and Storandt", "start_char_pos": 27, "end_char_pos": 56}, {"type": "A", "before": null, "after": "IWOCA 2016", "start_char_pos": 57, "end_char_pos": 57}, {"type": "A", "before": null, "after": "introduced the following problem:", "start_char_pos": 59, "end_char_pos": 59}, {"type": "R", "before": "collection", "after": "sequence", "start_char_pos": 75, "end_char_pos": 85}, {"type": "D", "before": "D_1,", "after": null, "start_char_pos": 97, "end_char_pos": 101}, {"type": "D", "before": ", D_n", "after": null, "start_char_pos": 123, "end_char_pos": 128}, {"type": "R", "before": "that grow at possibly different", "after": ". Initially, all disks have radius 0, and they grow at constant, but possibly different,", "start_char_pos": 142, "end_char_pos": 173}, {"type": "R", "before": "meet", "after": "touch", "start_char_pos": 201, "end_char_pos": 205}, {"type": "R", "before": "lower", "after": "higher", "start_char_pos": 225, "end_char_pos": 230}, {"type": "D", "before": "This problem was introduced by Funke, Krumpe, and Storandt", "after": null, "start_char_pos": 249, "end_char_pos": 307}, {"type": "R", "before": "IWOCA 2016", "after": "The goal is to determine the elimination order, i.e., the order in which the disks disappear", "start_char_pos": 308, "end_char_pos": 318}, {"type": "R", "before": "computing the times and the order of disappearance. This algorithm also works for", "after": "this problem. 
Our solution extends to", "start_char_pos": 377, "end_char_pos": 458}, {"type": "R", "before": "such as rectangles)and", "after": "e.g., rectangles), and it works", "start_char_pos": 474, "end_char_pos": 496}, {"type": "R", "before": "Using quadtrees, we provide", "after": "We also describe", "start_char_pos": 521, "end_char_pos": 548}, {"type": "R", "before": "runs in near linear time , although this second algorithm has a logarithmic dependence on either", "after": "is based on quadtrees. Its running time is O", "start_char_pos": 579, "end_char_pos": 675}, {"type": "A", "before": null, "after": "n", "start_char_pos": 680, "end_char_pos": 680}, {"type": "A", "before": null, "after": "\\log n + \\min \\{ \\log \\Delta, \\log \\Phi \\}", "start_char_pos": 685, "end_char_pos": 685}, {"type": "A", "before": null, "after": ", where \\Delta is", "start_char_pos": 695, "end_char_pos": 695}, {"type": "R", "before": "speed to the slowest speed of disks or the spread of disk centers (the", "after": "and the slowest growth rate and \\Phi is the", "start_char_pos": 721, "end_char_pos": 791}, {"type": "R", "before": "maximum to the minimum distance between them). Our result", "after": "largest and the smallest distance between two disk centers. This", "start_char_pos": 805, "end_char_pos": 862}, {"type": "A", "before": null, "after": ",", "start_char_pos": 986, "end_char_pos": 986}, {"type": "R", "before": "EWCG", "after": "EuroCG", "start_char_pos": 1012, "end_char_pos": 1016}, {"type": "D", "before": "on the problem", "after": null, "start_char_pos": 1072, "end_char_pos": 1086}], "sents_char_pos": [0, 181, 248, 428, 520, 851, 971, 1023]} {"doc_id": "1704.08219", "revision_depth": "1", "before_revision": "We devise dynamic algorithms for the following (weak ) visibility polygon computation problems: * Maintaining visibility polygon of a fixed point located interior to simple polygon amid vertex insertions and deletions to simple polygon. 
* Answering visibility polygon query corresponding to any point located exterior to simple polygon amid vertex insertions and deletions to simple polygon. * Maintaining weak visibility polygon of a fixed line segment located interior to simple polygon amid vertex insertions to simple polygon . * Answering weak visibility polygon query corresponding to any line segment located interior to simple polygon amid both vertex insertions and deletions to simple polygon. * Maintaining visibility polygon of a fixed point located in the free space of the polygonal domain amid vertex insertions to simple polygons in that polygonal domain. The proposed algorithms are output-sensitive, and the time complexities of algorithms for (weak) visibility polygon maintenance are expressed in terms of change in output complexity .", "after_revision": "We devise the following dynamic algorithms for both maintaining as well as querying for the visibility and weak visibility polygons amid vertex insertions and/or deletions to the simple polygon. * A fully-dynamic algorithm for maintaining the visibility polygon of a fixed point located interior to the simple polygon amid vertex insertions and deletions to the simple polygon. The time complexity to update the visibility polygon of a point q due to the insertion (resp. deletion) of vertex v to (resp. from) the current simple polygon is expressed in terms of the number of combinatorial changes needed to the visibility polygon of q due to the insertion (resp. deletion) of v. * An output-sensitive query algorithm to answer the visibility polygon query corresponding to any point p in \\mathbb{R amid vertex insertions and deletions to the simple polygon. If p is not exterior to the current simple polygon then the visibility polygon of p is computed. Otherwise, our algorithm outputs the visibility polygon corresponding to the exterior visibility of p . 
* An output-sensitive algorithm to compute the weak visibility polygon corresponding to any query line segment located interior to the simple polygon amid both the vertex insertions and deletions to the simple polygon. Each of these algorithms require preprocessing the initial simple polygon. And, the algorithms that maintain the visibility polygon (resp. weak visibility polygon) compute the visibility polygon (resp. weak visibility polygon) with respect to the initial simple polygon during the preprocessing phase .", "edit_actions": [{"type": "A", "before": null, "after": "the following", "start_char_pos": 10, "end_char_pos": 10}, {"type": "R", "before": "the following (weak ) visibility polygon computation problems:", "after": "both maintaining as well as querying for the visibility and weak visibility polygons amid vertex insertions and/or deletions to the simple polygon.", "start_char_pos": 34, "end_char_pos": 96}, {"type": "R", "before": "Maintaining", "after": "A fully-dynamic algorithm for maintaining the", "start_char_pos": 99, "end_char_pos": 110}, {"type": "A", "before": null, "after": "the", "start_char_pos": 167, "end_char_pos": 167}, {"type": "A", "before": null, "after": "the", "start_char_pos": 223, "end_char_pos": 223}, {"type": "A", "before": null, "after": "The time complexity to update the visibility polygon of a point q due to the insertion (resp. deletion) of vertex v to (resp. from) the current simple polygon is expressed in terms of the number of combinatorial changes needed to the visibility polygon of q due to the insertion (resp. 
deletion) of v.", "start_char_pos": 240, "end_char_pos": 240}, {"type": "R", "before": "Answering", "after": "An output-sensitive query algorithm to answer the", "start_char_pos": 243, "end_char_pos": 252}, {"type": "R", "before": "located exterior to simple polygon", "after": "p in \\mathbb{R", "start_char_pos": 305, "end_char_pos": 339}, {"type": "A", "before": null, "after": "the", "start_char_pos": 380, "end_char_pos": 380}, {"type": "R", "before": "* Maintaining weak", "after": "If p is not exterior to the current simple polygon then the", "start_char_pos": 397, "end_char_pos": 415}, {"type": "R", "before": "a fixed line segment located interior to simple polygon amid vertex insertions to simple polygon", "after": "p is computed. Otherwise, our algorithm outputs the visibility polygon corresponding to the exterior visibility of p", "start_char_pos": 438, "end_char_pos": 534}, {"type": "R", "before": "Answering", "after": "An output-sensitive algorithm to compute the", "start_char_pos": 539, "end_char_pos": 548}, {"type": "D", "before": "query", "after": null, "start_char_pos": 573, "end_char_pos": 578}, {"type": "A", "before": null, "after": "query", "start_char_pos": 600, "end_char_pos": 600}, {"type": "A", "before": null, "after": "the", "start_char_pos": 634, "end_char_pos": 634}, {"type": "A", "before": null, "after": "the", "start_char_pos": 660, "end_char_pos": 660}, {"type": "A", "before": null, "after": "the", "start_char_pos": 696, "end_char_pos": 696}, {"type": "R", "before": "* Maintaining visibility polygon of a fixed point located in the free space of the polygonal domain amid vertex insertions to simple polygons in that polygonal domain. The proposed algorithms are output-sensitive, and the time complexities of algorithms for (weak) visibility polygon maintenance are expressed in terms of change in output complexity", "after": "Each of these algorithms require preprocessing the initial simple polygon. 
And, the algorithms that maintain the visibility polygon (resp. weak visibility polygon) compute the visibility polygon (resp. weak visibility polygon) with respect to the initial simple polygon during the preprocessing phase", "start_char_pos": 713, "end_char_pos": 1062}], "sents_char_pos": [0, 239, 396, 712, 880]} {"doc_id": "1705.00924", "revision_depth": "1", "before_revision": "In the circle packing problem for triangular containers , one asks whether a given set of circles can be packed into a given triangle . Packing problems like this have been shown to be NP-hard. In this paper, we present a new sufficient condition for packing circles into any right or obtuse triangle using only the circles' combined area: It is possible to pack any circle instance whose combined area does not exceed the triangle's incircle . This area condition is tight, in the sense that for any larger area , there are instances which cannot be packed. A similar result for square containers has been established earlier this year, using the versatile, divide-and-conquer-based Split Packing algorithm. In this paper, we present a generalized, weighted version of this approach, allowing us to construct packings of circles into asymmetric triangles. It seems crucial to the success of these results that Split Packingdoes not depend on an orthogonal subdivision structure. Beside realizing all packings below the critical density bound, our algorithm can also be used as a constant-factor approximation algorithm when looking for the smallest non-acute triangle of a given side ratio in which a given set of circles can be packed . An interactive visualization of the Split Packing approach and other related material can be found at URL", "after_revision": "In the classic circle packing problem , one asks whether a given set of circles can be packed into a given container . Packing problems like this have been shown to be NP-hard. 
In this paper, we present new sufficient conditions for packing circles into square and triangular containers, using only the sum of the circles' areas: For square containers, it is possible to pack any set of circles with a combined area of up to approximately 53.90\\% of the square's area. And when the container is a right or obtuse triangle, any set of circles whose combined area does not exceed the triangle's incircle can be packed. These area conditions are tight, in the sense that for any larger areas , there are sets of circles which cannot be packed. Similar results have long been known for squares, but to the best of our knowledge, we give the first results of this type for circular objects. Our proofs are constructive: We describe a versatile, divide-and-conquer-based algorithm for packing circles into various container shapes with optimal worst-case density. It employs an elegant subdivision scheme that recursively splits the circles into two groups and then packs these into subcontainers. We call this algorithm \"Split Packing\". It can be used as a constant-factor approximation algorithm when looking for the smallest container in which a given set of circles can be packed , due to its polynomial runtime. 
A browser-based, interactive visualization of the Split Packing approach and other related material can be found at URL", "edit_actions": [{"type": "A", "before": null, "after": "classic", "start_char_pos": 7, "end_char_pos": 7}, {"type": "D", "before": "for triangular containers", "after": null, "start_char_pos": 31, "end_char_pos": 56}, {"type": "R", "before": "triangle", "after": "container", "start_char_pos": 126, "end_char_pos": 134}, {"type": "R", "before": "a new sufficient condition", "after": "new sufficient conditions", "start_char_pos": 221, "end_char_pos": 247}, {"type": "R", "before": "any right or obtuse triangle", "after": "square and triangular containers,", "start_char_pos": 273, "end_char_pos": 301}, {"type": "R", "before": "circles' combined area: It", "after": "sum of the circles' areas: For square containers, it", "start_char_pos": 317, "end_char_pos": 343}, {"type": "R", "before": "circle instance", "after": "set of circles with a combined area of up to approximately 53.90\\% of the square's area. And when the container is a right or obtuse triangle, any set of circles", "start_char_pos": 368, "end_char_pos": 383}, {"type": "R", "before": ". This area condition is", "after": "can be packed. These area conditions are", "start_char_pos": 444, "end_char_pos": 468}, {"type": "R", "before": "area", "after": "areas", "start_char_pos": 509, "end_char_pos": 513}, {"type": "R", "before": "instances", "after": "sets of circles", "start_char_pos": 526, "end_char_pos": 535}, {"type": "R", "before": "A similar result for square containers has been established earlier this year, using the versatile,", "after": "Similar results have long been known for squares, but to the best of our knowledge, we give the first results of this type for circular objects. Our proofs are constructive: We describe a versatile,", "start_char_pos": 560, "end_char_pos": 659}, {"type": "R", "before": "Split Packing algorithm. 
In this paper, we present a generalized, weighted version of this approach, allowing us to construct packings of circles into asymmetric triangles. It seems crucial to the success of these results that Split Packingdoes not depend on an orthogonal subdivision structure. Beside realizing all packings below the critical density bound, our algorithm can also", "after": "algorithm for packing circles into various container shapes with optimal worst-case density. It employs an elegant subdivision scheme that recursively splits the circles into two groups and then packs these into subcontainers. We call this algorithm \"Split Packing\". It can", "start_char_pos": 685, "end_char_pos": 1067}, {"type": "R", "before": "non-acute triangle of a given side ratio", "after": "container", "start_char_pos": 1151, "end_char_pos": 1191}, {"type": "R", "before": ". An", "after": ", due to its polynomial runtime. A browser-based,", "start_char_pos": 1238, "end_char_pos": 1242}], "sents_char_pos": [0, 194, 445, 559, 709, 857, 980, 1239]} {"doc_id": "1705.06479", "revision_depth": "2", "before_revision": "We investigate the dynamical effects of non-Gaussian asymmetric stable L\\'evy noisy fluctuations on the evolution of the transcription factor activator in the kinetic concentration model of a single genetic regulation system. The noisy fluctuations arise from the determination of the synthesis reaction rate , and are modeled by asymmetric stable L\\'evy motions . 
We compute two deterministic quantities, the mean first exit time (MFET) and the first escape probability (FEP), to examine the transition events from low to high concentration state under the influence of L\\'evy noise.We find that the fluctuations of the synthesis reaction rate lead to peculiar transitions to higher concentrations (i.e., higher likelihood for transcriptions) , such as higher likelihood of transcription for larger positive skewness parameter (asymmetry index) \\beta, achieving the minimum likelihood of transcription by tuning skewness parameter \\beta, achieving the higher likelihood of transcription by tuning non-Gaussianity index \\alpha, and a bifurcation for transcription at the critical value \\alpha = 1. We conduct a series of numerical experiments about `regulating' the likelihood of gene transcription by tuning asymmetric stable L\\'evy noise parameters .", "after_revision": "We investigate the dynamical effects of non-Gaussian asymmetric stable L\\'evy fluctuations on the evolution of the transcription factor activator in a genetic regulation system. The noisy fluctuations arise from the synthesis reaction rate . We compute two deterministic quantities, the mean first exit time (MFET) and the first escape probability (FEP), in order to examine the likelihood for transcriptions: The mean time scale for the system exits the low concentration state (the longer the exit time, the less likely for transcription) and the switch probability from low concentration states to high concentration states (corresponding to likelihood for transcription). 
By focusing on the impact of skewness (i.e., non-symmetry) in the probability distributions of noise, we find that the fluctuations in the synthesis reaction rate lead to peculiar transitions to high concentrations and thus to possible transcriptions , such as realizing higher likelihood of transcription for larger positive skewness (i.e., asymmetry) index \\beta, causing a bifurcation for the likelihood of transcription at the critical non-Gaussianity index value \\alpha = 1 (i.e., beyond which the likelihood for transcription suddenly increases), and achieving a turning point at the threshold value \\beta \\approx 0.55 (i.e., beyond which the likelihood for transcription reversed for \\alpha values). The bifurcation and turning point phenomena do not occur in the symmetric noise case (\\beta =0). We conduct a series of numerical experiments about `regulating' the likelihood of gene transcription by tuning asymmetric stable L\\'evy noise indexes. These offer insights for possible ways of achieving gene regulation in experimental research .", "edit_actions": [{"type": "D", "before": "noisy", "after": null, "start_char_pos": 78, "end_char_pos": 83}, {"type": "R", "before": "the kinetic concentration model of a single", "after": "a", "start_char_pos": 155, "end_char_pos": 198}, {"type": "D", "before": "determination of the", "after": null, "start_char_pos": 264, "end_char_pos": 284}, {"type": "D", "before": ", and are modeled by asymmetric stable L\\'evy motions", "after": null, "start_char_pos": 309, "end_char_pos": 362}, {"type": "A", "before": null, "after": "in order", "start_char_pos": 478, "end_char_pos": 478}, {"type": "R", "before": "transition events from low", "after": "likelihood for transcriptions: The mean time scale for the system exits the low concentration state (the longer the exit time, the less likely for transcription) and the switch probability from low concentration states", "start_char_pos": 494, "end_char_pos": 520}, {"type": "R", "before": 
"state under the influence of L\\'evy noise.We", "after": "states (corresponding to likelihood for transcription). By focusing on the impact of skewness (i.e., non-symmetry) in the probability distributions of noise, we", "start_char_pos": 543, "end_char_pos": 587}, {"type": "R", "before": "of", "after": "in", "start_char_pos": 615, "end_char_pos": 617}, {"type": "R", "before": "higher concentrations (i.e., higher likelihood for transcriptions)", "after": "high concentrations and thus to possible transcriptions", "start_char_pos": 678, "end_char_pos": 744}, {"type": "A", "before": null, "after": "realizing", "start_char_pos": 755, "end_char_pos": 755}, {"type": "R", "before": "parameter (asymmetry index) \\beta, achieving the minimum", "after": "(i.e., asymmetry) index \\beta, causing a bifurcation for the", "start_char_pos": 820, "end_char_pos": 876}, {"type": "R", "before": "by tuning skewness parameter \\beta, achieving the higher likelihood of transcription by tuning", "after": "at the critical", "start_char_pos": 905, "end_char_pos": 999}, {"type": "R", "before": "\\alpha, and a bifurcation for transcription at the critical value \\alpha", "after": "value \\alpha", "start_char_pos": 1022, "end_char_pos": 1094}, {"type": "R", "before": "1.", "after": "1 (i.e., beyond which the likelihood for transcription suddenly increases), and achieving a turning point at the threshold value \\beta \\approx 0.55 (i.e., beyond which the likelihood for transcription reversed for \\alpha values). The bifurcation and turning point phenomena do not occur in the symmetric noise case (\\beta =0).", "start_char_pos": 1097, "end_char_pos": 1099}, {"type": "R", "before": "parameters", "after": "indexes. 
These offer insights for possible ways of achieving gene regulation in experimental research", "start_char_pos": 1242, "end_char_pos": 1252}], "sents_char_pos": [0, 225, 364, 585, 744, 847]} {"doc_id": "1706.00091", "revision_depth": "1", "before_revision": "Let I(n,l) denote the maximum possible number of incidences between n points and l lines. It is well known that I(n,l) = \\Theta( (nl)^{2/3 + n + l) 2,3,7 . Let C _{SzTr} denote the constant of proportionality of the (nl)^{2/3 term. The known lower bound, due to Elekes 2 , is C _{SzTr} \\ge 2^{-2/3} = 0.63. With a slight modification of Elekes' construction, we show that it can give a better lower bound of C _{SzTr} \\ge 1, i.e., I(n,l) \\ge (nl)^{2/3 . Furthermore, we analyze a different construction given by %DIFDELCMD < \\erdos [%%% 3 } , and show its constant of proportionality to be even better, C _{\\mathrm{SzTr}} \\ge 3/(2^{1/3}\\pi^{2/3}) = 1.11.", "after_revision": "Let I(n,l) denote the maximum possible number of incidences between n points and l lines. It is well known that I(n,l) = \\Theta( n^{2/3 + n + l) . Let c _{SzTr} denote the lower bound on the constant of proportionality of the n^{2/3 term. The known lower bound, due to Elekes , is c _{SzTr} \\ge 2^{-2/3} = 0.63. With a slight modification of Elekes' construction, we show that it can give a better lower bound of c _{SzTr} \\ge 1, i.e., I(n,l) \\ge n^{2/3 . Furthermore, we analyze a different construction given by %DIFDELCMD < \\erdos [%%% Erd \\H o}s , and show its constant of proportionality to be even better, c _{\\mathrm{SzTr}} \\ge 3/(2^{1/3}\\pi^{2/3}) \\approx 1.11.", "edit_actions": [{"type": "R", "before": "(nl)^{2/3", "after": "n^{2/3", "start_char_pos": 129, "end_char_pos": 138}, {"type": "D", "before": "2,3,7", "after": null, "start_char_pos": 148, "end_char_pos": 153}, {"type": "R", "before": ". Let C", "after": ". 
Let c", "start_char_pos": 154, "end_char_pos": 161}, {"type": "A", "before": null, "after": "lower bound on the", "start_char_pos": 181, "end_char_pos": 181}, {"type": "R", "before": "(nl)^{2/3", "after": "n^{2/3", "start_char_pos": 217, "end_char_pos": 226}, {"type": "D", "before": "2", "after": null, "start_char_pos": 270, "end_char_pos": 271}, {"type": "R", "before": ", is C", "after": ", is c", "start_char_pos": 272, "end_char_pos": 278}, {"type": "R", "before": "C", "after": "c", "start_char_pos": 409, "end_char_pos": 410}, {"type": "R", "before": "(nl)^{2/3", "after": "n^{2/3", "start_char_pos": 443, "end_char_pos": 452}, {"type": "R", "before": "3", "after": "Erd", "start_char_pos": 538, "end_char_pos": 539}, {"type": "A", "before": null, "after": "\\H o", "start_char_pos": 540, "end_char_pos": 540}, {"type": "A", "before": null, "after": "s", "start_char_pos": 541, "end_char_pos": 541}, {"type": "R", "before": "C", "after": "c", "start_char_pos": 604, "end_char_pos": 605}, {"type": "R", "before": "=", "after": "\\approx", "start_char_pos": 648, "end_char_pos": 649}], "sents_char_pos": [0, 89, 232, 307]} {"doc_id": "1706.00330", "revision_depth": "1", "before_revision": "The paradox of the energy transition is that , because of the low marginal costs of new renewable energy sources (RES) , it drags electricity prices down and discourages investments in flexible productions that are needed to compensate for the lack of dispatchability of the new RES - the energy transition discourages the investments that are required for its own harmonious expansion. To investigate how this paradox can be overcome, we argue that future electricity prices can be accurately modeled from the residual load obtained by subtracting the sum of inflexible productions from the load. 
Armed with the resulting quantitative economic indicator, we investigate future revenues for power plants with various degree of flexibility under different scenarios for the energy transition in the European power grid . We find that flexible productions will be financially rewarded better and sooner if the energy transition proceeds faster but at more or less constant total production, i.e. by reducing the production of thermal power plants at the same rate as the production of RES increases. Less flexible productions, on the other hand, will see their revenue grow more moderately. Our results advocate for a faster energy transition with a quicker withdrawal of baseload thermal power plants .", "after_revision": "The paradox of the energy transition is that the low marginal costs of new renewable energy sources (RES) drag electricity prices down and discourage investments in flexible productions that are needed to compensate for the lack of dispatchability of the new RES . The energy transition thus discourages the investments that are required for its own harmonious expansion. To investigate how this paradox can be overcome, we argue that , under certain assumptions, future electricity prices are rather accurately modeled from the residual load obtained by subtracting non-flexible productions from the load. Armed with the resulting economic indicator, we investigate future revenues for European power plants with various degree of flexibility . We find that , if neither carbon taxes nor fuel prices change, flexible productions would be financially rewarded better and sooner if the energy transition proceeds faster but at more or less constant total production, i.e. by reducing the production of thermal power plants at the same rate as the RES production increases. Less flexible productions, on the other hand, would see their revenue grow more moderately. 
Our results indicate that a faster energy transition with a quicker withdrawal of thermal power plants would reward flexible productions faster .", "edit_actions": [{"type": "D", "before": ", because of", "after": null, "start_char_pos": 45, "end_char_pos": 57}, {"type": "R", "before": ", it drags", "after": "drag", "start_char_pos": 119, "end_char_pos": 129}, {"type": "R", "before": "discourages", "after": "discourage", "start_char_pos": 158, "end_char_pos": 169}, {"type": "R", "before": "- the energy transition", "after": ". The energy transition thus", "start_char_pos": 283, "end_char_pos": 306}, {"type": "A", "before": null, "after": ", under certain assumptions,", "start_char_pos": 450, "end_char_pos": 450}, {"type": "R", "before": "can be", "after": "are rather", "start_char_pos": 477, "end_char_pos": 483}, {"type": "R", "before": "the sum of inflexible", "after": "non-flexible", "start_char_pos": 550, "end_char_pos": 571}, {"type": "D", "before": "quantitative", "after": null, "start_char_pos": 624, "end_char_pos": 636}, {"type": "A", "before": null, "after": "European", "start_char_pos": 692, "end_char_pos": 692}, {"type": "D", "before": "under different scenarios for the energy transition in the European power grid", "after": null, "start_char_pos": 741, "end_char_pos": 819}, {"type": "R", "before": "flexible productions will", "after": ", if neither carbon taxes nor fuel prices change, flexible productions would", "start_char_pos": 835, "end_char_pos": 860}, {"type": "R", "before": "production of RES", "after": "RES production", "start_char_pos": 1071, "end_char_pos": 1088}, {"type": "R", "before": "will", "after": "would", "start_char_pos": 1146, "end_char_pos": 1150}, {"type": "R", "before": "advocate for", "after": "indicate that", "start_char_pos": 1203, "end_char_pos": 1215}, {"type": "D", "before": "baseload", "after": null, "start_char_pos": 1272, "end_char_pos": 1280}, {"type": "A", "before": null, "after": "would reward flexible productions 
faster", "start_char_pos": 1302, "end_char_pos": 1302}], "sents_char_pos": [0, 386, 598, 821, 1099, 1190]} {"doc_id": "1706.03341", "revision_depth": "1", "before_revision": "From analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting {\\it Group-Server Queues}, and establishes two representative group-server queues through loss networks and impatient customers, respectively. Furthermore, such two group-server queues are given model descriptions and necessary interpretation. Also, simple mathematical discussion is provided, and simulations are made to study the expected queue lengths, the expected sojourn times and the expected virtual service times. In addition, this paper also shows that this class of group-server queues are often encountered in other practical areas including communication networks, manufacturing systems, transportation networks, financial networks and healthcare systems. Note that the group-server queues are always used to design effectively dynamic control mechanisms through regrouping and recombining such many servers in a large-scale service system by means of, for example, bilateral threshold control, and customers transfer to the buffer or server groups. This leads to that the large-scale service system is divided into several adaptive and URLanizing subsystems through scheduling of batch customers and various service resources, which makes that the middle layer of this service system can more effectively be managed and strengthened under a dynamically , real-time and even reward framework. Based on this, performance of such a large-scale service system may be improved greatly in terms of introducing and analyzing such group-server queues. 
Therefore, not only is analysis of group-server queues regarded as a new interesting research direction, but it also exists many theoretic challenging , basic difficulties and open problems in the area of queueing networks.", "after_revision": "By analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting {\\it Group-Server Queues}, and establishes two representative group-server queues through loss networks and impatient customers, respectively. Furthermore, such two group-server queues are given model descriptions and necessary interpretation. Also, simple mathematical discussion is provided, and simulations are made to study the expected queue lengths, the expected sojourn times and the expected virtual service times. In addition, this paper also shows that this class of group-server queues are often encountered in many other practical areas including communication networks, manufacturing systems, transportation networks, financial networks and healthcare systems. Note that the group-server queues are always used to design effectively dynamic control mechanisms through regrouping and recombining such many servers in a large-scale service system by means of, for example, bilateral threshold control, and customers transfer to the buffer or server groups. This leads to the large-scale service system that is divided into several adaptive and URLanizing subsystems through scheduling of batch customers and regrouping of service resources, which make the middle layer of this service system more effectively managed and strengthened under a dynamic , real-time and even reward optimal framework. Based on this, performance of such a large-scale service system may be improved greatly in terms of introducing and analyzing such group-server queues. 
Therefore, not only analysis of group-server queues is regarded as a new interesting research direction, but there also exists many theoretical challenges , basic difficulties and open problems in the area of queueing networks.", "edit_actions": [{"type": "R", "before": "From", "after": "By", "start_char_pos": 0, "end_char_pos": 4}, {"type": "A", "before": null, "after": "many", "start_char_pos": 638, "end_char_pos": 638}, {"type": "D", "before": "that", "after": null, "start_char_pos": 1094, "end_char_pos": 1098}, {"type": "A", "before": null, "after": "that", "start_char_pos": 1130, "end_char_pos": 1130}, {"type": "R", "before": "various", "after": "regrouping of", "start_char_pos": 1232, "end_char_pos": 1239}, {"type": "R", "before": "makes that", "after": "make", "start_char_pos": 1265, "end_char_pos": 1275}, {"type": "R", "before": "can more effectively be", "after": "more effectively", "start_char_pos": 1316, "end_char_pos": 1339}, {"type": "R", "before": "dynamically", "after": "dynamic", "start_char_pos": 1373, "end_char_pos": 1384}, {"type": "A", "before": null, "after": "optimal", "start_char_pos": 1413, "end_char_pos": 1413}, {"type": "D", "before": "is", "after": null, "start_char_pos": 1597, "end_char_pos": 1599}, {"type": "A", "before": null, "after": "is", "start_char_pos": 1632, "end_char_pos": 1632}, {"type": "R", "before": "it", "after": "there", "start_char_pos": 1687, "end_char_pos": 1689}, {"type": "R", "before": "theoretic challenging", "after": "theoretical challenges", "start_char_pos": 1707, "end_char_pos": 1728}], "sents_char_pos": [0, 258, 359, 538, 785, 1079, 1424, 1576]} {"doc_id": "1707.05260", "revision_depth": "1", "before_revision": "Achieving strong real-time guarantees in multi-core platforms is challenging due to the extensive hardware resource sharing in the memory hierarchy. Modern platforms and OS's, however, provide no means to appropriately handle memory regions that are crucial for real-time performance . 
In this paper, we propose a new OS-level abstraction, namely Deterministic Memory, to define memory regions that are specially handled by the OS and the hardware to exhibit strong real-time guarantees . We show that the deterministic memory abstraction can be introduced in the OS at the granularity of single memory pages by exploiting existing hardware support. When deterministic memory pages are accessed, the attribute is propagated through all the levels of the memory hierarchy. Clearly, the hardware needs to be designed to ensure real-time handling of deterministic memory requests. To illustrate the potentialities of the new abstraction, we also propose a novel design for a shared cache controller that takes advantage of deterministic memory. Minimum cache space is guaranteed for deterministic memory, while unused cache space is left available to non-real-time applications. We implemented OS support for deterministic memory in the Linux kernel; and we evaluated the proposed hardware modifications in a cycle-accurate full-system simulator. We study the effectiveness of our approach on a set of synthetic and real benchmarks. Results show that it is possible to achieve (i) temporal determinism as strong as with traditional way-based cache partitioning ; and (ii) giving 50\\% of the private partition space, on average, to the non-real-time applications .", "after_revision": "Poor timing predictability of multicore processors has been a long-standing challenge in the real-time systems community . In this paper, we make a case that a fundamental problem that prevents efficient and predictable real-time com- puting on multicore is the lack of a proper memory abstraction to express memory criticality, which cuts across various layers of the system: the application, OS, and hardware. We therefore propose a new holistic resource management approach driven by a new memory abstraction, which we call Deterministic Memory. 
The key characteristic of deterministic memory is that the platform - the OS and hardware - guarantees small and tightly bounded worst-case memory access timing. In contrast, we call the conventional memory abstraction as best-effort memory in which only highly pessimistic worst-case bounds can be achieved. We present how the two memory abstractions can be realized with small extensions to existing OS and hardware architecture. In particular, we show the potential benefits of our approach in the context of shared cache management, by presenting a deterministic memory-aware cache architecture and its manage- ment scheme. We evaluate the effectiveness of the deterministic memory-aware cache management approach compared with a conventional way-based cache partitioning approach, using a set of synthetic and real benchmarks. The results show that our approach achieves (i) the same degree of temporal determinism of traditional way-based cache partitioning for deterministic memory, (ii) while freeing up to 49\\% of additional cache space, on average, for best-effort memory, and consequently improving the cache hit rate by 39\\%, on average, for non-real-time workloads. We also discuss how the deterministic memory abstraction can be leveraged in other parts of the memory hierarchy, particularly in the memory controller .", "edit_actions": [{"type": "R", "before": "Achieving strong", "after": "Poor timing predictability of multicore processors has been a long-standing challenge in the", "start_char_pos": 0, "end_char_pos": 16}, {"type": "R", "before": "guarantees in multi-core platforms is challenging due to the extensive hardware resource sharing in the memory hierarchy. 
Modern platforms and OS's, however, provide no means to appropriately handle memory regions that are crucial for real-time performance", "after": "systems community", "start_char_pos": 27, "end_char_pos": 283}, {"type": "A", "before": null, "after": "make a case that a fundamental problem that prevents efficient and predictable real-time com- puting on multicore is the lack of a proper memory abstraction to express memory criticality, which cuts across various layers of the system: the application, OS, and hardware. We therefore", "start_char_pos": 304, "end_char_pos": 304}, {"type": "R", "before": "OS-level abstraction, namely Deterministic Memory, to define memory regions that are specially handled by the OS and the hardware to exhibit strong real-time guarantees . We show that the deterministic memory abstraction can be introduced in the OS at the granularity of single memory pages by exploiting existing", "after": "holistic resource management approach driven by a new memory abstraction, which we call Deterministic Memory. The key characteristic of deterministic memory is that the platform - the OS and", "start_char_pos": 319, "end_char_pos": 632}, {"type": "R", "before": "support. When deterministic memory pages are accessed,", "after": "- guarantees small and tightly bounded worst-case memory access timing. In contrast, we call the conventional memory abstraction as best-effort memory in which only highly pessimistic worst-case bounds can be achieved. We present how the two memory abstractions can be realized with small extensions to existing OS and hardware architecture. In particular, we show the potential benefits of our approach in the context of shared cache management, by presenting a deterministic memory-aware cache architecture and its manage- ment scheme. We evaluate", "start_char_pos": 642, "end_char_pos": 696}, {"type": "D", "before": "attribute is propagated through all the levels of the memory hierarchy. 
Clearly, the hardware needs to be designed to ensure real-time handling of deterministic memory requests. To illustrate the potentialities of the new abstraction, we also propose a novel design for a shared cache controller that takes advantage of deterministic memory. Minimum cache space is guaranteed for deterministic memory, while unused cache space is left available to non-real-time applications. We implemented OS support for deterministic memory in the Linux kernel; and we evaluated the proposed hardware modifications in a cycle-accurate full-system simulator. We study the", "after": null, "start_char_pos": 701, "end_char_pos": 1357}, {"type": "R", "before": "our approach on a", "after": "the deterministic memory-aware cache management approach compared with a conventional way-based cache partitioning approach, using a", "start_char_pos": 1375, "end_char_pos": 1392}, {"type": "R", "before": "Results show that it is possible to achieve", "after": "The results show that our approach achieves", "start_char_pos": 1431, "end_char_pos": 1474}, {"type": "R", "before": "temporal determinism as strong as with", "after": "the same degree of temporal determinism of", "start_char_pos": 1479, "end_char_pos": 1517}, {"type": "R", "before": "; and", "after": "for deterministic memory,", "start_char_pos": 1559, "end_char_pos": 1564}, {"type": "R", "before": "giving 50\\% of the private partition", "after": "while freeing up to 49\\% of additional cache", "start_char_pos": 1570, "end_char_pos": 1606}, {"type": "R", "before": "to the", "after": "for best-effort memory, and consequently improving the cache hit rate by 39\\%, on average, for", "start_char_pos": 1626, "end_char_pos": 1632}, {"type": "R", "before": "applications", "after": "workloads. 
We also discuss how the deterministic memory abstraction can be leveraged in other parts of the memory hierarchy, particularly in the memory controller", "start_char_pos": 1647, "end_char_pos": 1659}], "sents_char_pos": [0, 148, 285, 489, 650, 772, 878, 1042, 1176, 1248, 1344, 1430, 1560]} {"doc_id": "1708.04217", "revision_depth": "1", "before_revision": "We introduce a general class of multifractional stochastic processes driven by a multifractional Brownian motion and study estimation of their pointwise H\\\"older exponents (PHE) using the localized generalized quadratic variation approach. By comparing it with the other two benchmark estimation approaches through a simulation study, we show that our estimator has better performance in the case where the observed process is some unknown bivariate function of time and multifractional Brownian motion . The time-varying PHE feature allows us to apply such class of multifractional processes to model stock prices under various market conditions . An empirical study on modeling cross-listed stocks provides new evidence that equity's path roughness varies via time and price informativeness properties from global markets.", "after_revision": "We introduce a general class of stochastic processes driven by a multifractional Brownian motion (mBm) and study estimation problems of their pointwise H\\\"older exponents (PHE) based on a new localized generalized quadratic variation approach. By comparing our suggested approach with the other two existing benchmark estimation approaches (classicial LGQV and oscillation approach) through a simulation study, we show that our estimator has better performance in the case where the observed process is some unknown bivariate function of time and mBm . The time-varying feature of the PHE allows us to apply such class of multifractional processes to model stock prices under various market conditions , that are both time-dependent and region-dependent. 
As an application to finance, an empirical study on modeling cross-listed stocks provides new evidence that the equity path's roughness varies via time and stock price informativeness properties from global stock markets.", "edit_actions": [{"type": "D", "before": "multifractional", "after": null, "start_char_pos": 32, "end_char_pos": 47}, {"type": "A", "before": null, "after": "(mBm)", "start_char_pos": 113, "end_char_pos": 113}, {"type": "A", "before": null, "after": "problems", "start_char_pos": 135, "end_char_pos": 135}, {"type": "R", "before": "using the", "after": "based on a new", "start_char_pos": 180, "end_char_pos": 189}, {"type": "R", "before": "it", "after": "our suggested approach", "start_char_pos": 255, "end_char_pos": 257}, {"type": "A", "before": null, "after": "existing", "start_char_pos": 277, "end_char_pos": 277}, {"type": "A", "before": null, "after": "(classicial LGQV and oscillation approach)", "start_char_pos": 310, "end_char_pos": 310}, {"type": "R", "before": "multifractional Brownian motion", "after": "mBm", "start_char_pos": 475, "end_char_pos": 506}, {"type": "R", "before": "PHE feature", "after": "feature of the PHE", "start_char_pos": 526, "end_char_pos": 537}, {"type": "R", "before": ". An", "after": ", that are both time-dependent and region-dependent. As an application to finance, an", "start_char_pos": 651, "end_char_pos": 655}, {"type": "R", "before": "equity's path", "after": "the equity path's", "start_char_pos": 731, "end_char_pos": 744}, {"type": "A", "before": null, "after": "stock", "start_char_pos": 775, "end_char_pos": 775}, {"type": "A", "before": null, "after": "stock", "start_char_pos": 821, "end_char_pos": 821}], "sents_char_pos": [0, 241, 508, 652]} {"doc_id": "1708.05469", "revision_depth": "1", "before_revision": " We study the problem of guarding an orthogonal polyhedron having reflex edges in just two directions (as opposed to three ) by placing guards on reflex edges only. 
We show that (r - g)/2 \\left\\lfloor {2} }\\right\\rfloor +1 reflex edge guards are sufficient, where r is the number of reflex edges in a given polyhedron and g is its genus. This bound is tight for g=0. We thereby generalize a classic planar Art Gallery theorem of O'Rourke, which states that the same upper bound holds for vertex guards in an orthogonal polygon with r reflex vertices and g holes. Then we give a similar upper bound in terms of m, the total number of edges in the polyhedron. We prove that (m - 4)/8 \\left\\lfloor {8} }\\right\\rfloor +g reflex edge guards are sufficient, whereas the previous best known bound was \\lfloor 11m/72+g/6 - 1 \\rfloor edge guards (not necessarily reflex). We also discuss the setting in which guards are open (i.e., they are segments without the endpoints), proving that the same results hold even in this more challenging case. Finally, we show how to compute guard locations in O(n log n) time.", "after_revision": "Let an orthogonal polyhedron be the union of a finite set of boxes in \\mathbb R^3 (i.e., cuboids with edges parallel to the coordinate axes), whose surface is a connected 2-manifold. We study the NP-complete problem of guarding a non-convex orthogonal polyhedron having reflex edges in just two directions (as opposed to three , in the general case ) by placing the minimum number of edge guards on reflex edges only. We show that \\left\\lfloor \\frac{r-g{2} }\\right\\rfloor +1 reflex edge guards are sufficient, where r is the number of reflex edges and g is the polyhedron's genus. This bound is tight for g=0. We thereby generalize a classic planar Art Gallery theorem of O'Rourke, which states that the same upper bound holds for vertex guards in an orthogonal polygon with r reflex vertices and g holes. Then we give a similar upper bound in terms of m, the total number of edges in the polyhedron. 
We prove that \\left\\lfloor \\frac{m-4{8} }\\right\\rfloor +g reflex edge guards are sufficient, whereas the previous best known bound was \\lfloor 11m/72+g/6 \\rfloor-1 edge guards (not necessarily reflex). We also consider the setting in which guards are open (i.e., they are segments without the endpoints), proving that the same results hold even in this more challenging case. Finally, we show how to compute guard locations matching the above bounds in O(n \\log n) time.", "edit_actions": [{"type": "A", "before": null, "after": "Let an orthogonal polyhedron be the union of a finite set of boxes in \\mathbb R^3 (i.e., cuboids with edges parallel to the coordinate axes), whose surface is a connected 2-manifold.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "A", "before": null, "after": "NP-complete", "start_char_pos": 14, "end_char_pos": 14}, {"type": "R", "before": "an", "after": "a non-convex", "start_char_pos": 35, "end_char_pos": 37}, {"type": "A", "before": null, "after": ", in the general case", "start_char_pos": 124, "end_char_pos": 124}, {"type": "A", "before": null, "after": "the minimum number of edge", "start_char_pos": 138, "end_char_pos": 138}, {"type": "D", "before": "(r - g)/2", "after": null, "start_char_pos": 181, "end_char_pos": 190}, {"type": "A", "before": null, "after": "\\frac{r-g", "start_char_pos": 204, "end_char_pos": 204}, {"type": "D", "before": "in a given polyhedron", "after": null, "start_char_pos": 299, "end_char_pos": 320}, {"type": "R", "before": "its", "after": "the polyhedron's", "start_char_pos": 330, "end_char_pos": 333}, {"type": "D", "before": "(m - 4)/8", "after": null, "start_char_pos": 675, "end_char_pos": 684}, {"type": "A", "before": null, "after": "\\frac{m-4", "start_char_pos": 698, "end_char_pos": 698}, {"type": "D", "before": "- 1", "after": null, "start_char_pos": 816, "end_char_pos": 819}, {"type": "A", "before": null, "after": "-1", "start_char_pos": 827, "end_char_pos": 827}, {"type": "R", "before": 
"discuss", "after": "consider", "start_char_pos": 874, "end_char_pos": 881}, {"type": "A", "before": null, "after": "matching the above bounds", "start_char_pos": 1087, "end_char_pos": 1087}, {"type": "R", "before": "log", "after": "\\log", "start_char_pos": 1095, "end_char_pos": 1098}], "sents_char_pos": [0, 167, 340, 369, 565, 660, 865, 1038]} {"doc_id": "1709.00765", "revision_depth": "1", "before_revision": "We consider the problem of implementing a given conditional distribution relating the states of a physical system at two separate times using a physical process with (potentially time-inhomogeneous) master equation dynamics . This problem arises implicitly in many nonequilibrium statistical physics scenarios, e.g., when designing processes to implement some desired computations , feedback-control protocols, and Maxwellian demons. However it is known that many such conditional distributions P over a state space X cannot be implemented using master equation dynamics over just the states in X. Here we show that any conditional distribution P can be implemented---if the process has access to additional \"hidden\" states, not in X. In particular, we show that any conditional distribution can be implemented in a thermodynamically reversible manner (achieving zero entropy production) if there are enough hidden states available. We investigate how the minimal number of such states needed to implement any P in a thermodynamically reversible manner depends on P. We provide exact results in the special case of conditional distributions that reduce to single-valued functions. For the fully general case, we provide an upper bound in terms of the nonnegative rank of P. In particular, we show that having access to one extra binary degree of freedom (doubling the number of states) is sufficient to carry out any P . 
Our results provide a novel type of bound on the physical resources needed to perform information processing---the size of a system's state space.", "after_revision": "We consider the problem of how to construct a physical process over a state space X that applies some desired conditional distribution P to initial states to produce final states . This problem arises in various scenarios in thermodynamics of computation and nonequilibrium statistical physics ( e.g., when designing processes to implement some desired computation , feedback-control protocol, or Maxwellian demon). It is known that there are classes of conditional distributions that cannot be implemented using any time-inhomogeneous master equation dynamics involving just the states in X. Here we show that any conditional distribution P can however be implemented if the master equation dynamics has access to additional \"hidden\" states, not in X. We investigate how the minimal number of such hidden states needed to implement some P in a thermodynamically reversible manner depends on P. We provide exact results in the special case of conditional distributions that represent single-valued functions. In the general case, we provide an upper bound on the needed number of hidden states in terms of the nonnegative rank of P. In particular, we show that if there are no constraints on what master equation we can construct, then having access to one extra binary degree of freedom (doubling the total number of states) is sufficient to carry out any P with zero entropy production . Our results also imply that for certain P that can be implemented without hidden states, having additional states available permits an implementation that generates less heat. 
These results can be seen as uncovering and investigating a novel type of cost of the physical resources needed to perform information processing---the size of a system's hidden state space.", "edit_actions": [{"type": "R", "before": "implementing a given conditional distribution relating the states of a physical system at two separate times using a physical process with (potentially time-inhomogeneous) master equation dynamics", "after": "how to construct a physical process over a state space X that applies some desired conditional distribution P to initial states to produce final states", "start_char_pos": 27, "end_char_pos": 223}, {"type": "R", "before": "implicitly in many", "after": "in various scenarios in thermodynamics of computation and", "start_char_pos": 246, "end_char_pos": 264}, {"type": "R", "before": "scenarios,", "after": "(", "start_char_pos": 300, "end_char_pos": 310}, {"type": "R", "before": "computations", "after": "computation", "start_char_pos": 368, "end_char_pos": 380}, {"type": "R", "before": "protocols, and Maxwellian demons. However it", "after": "protocol, or Maxwellian demon). 
It", "start_char_pos": 400, "end_char_pos": 444}, {"type": "R", "before": "many such conditional distributions P over a state space X", "after": "there are classes of conditional distributions that", "start_char_pos": 459, "end_char_pos": 517}, {"type": "A", "before": null, "after": "any time-inhomogeneous", "start_char_pos": 546, "end_char_pos": 546}, {"type": "R", "before": "over", "after": "involving", "start_char_pos": 572, "end_char_pos": 576}, {"type": "R", "before": "be implemented---if the process", "after": "however be implemented if the master equation dynamics", "start_char_pos": 652, "end_char_pos": 683}, {"type": "D", "before": "In particular, we show that any conditional distribution can be implemented in a thermodynamically reversible manner (achieving zero entropy production) if there are enough hidden states available.", "after": null, "start_char_pos": 736, "end_char_pos": 933}, {"type": "A", "before": null, "after": "hidden", "start_char_pos": 980, "end_char_pos": 980}, {"type": "R", "before": "any", "after": "some", "start_char_pos": 1008, "end_char_pos": 1011}, {"type": "R", "before": "reduce to", "after": "represent", "start_char_pos": 1148, "end_char_pos": 1157}, {"type": "R", "before": "For the fully", "after": "In the", "start_char_pos": 1183, "end_char_pos": 1196}, {"type": "A", "before": null, "after": "on the needed number of hidden states", "start_char_pos": 1237, "end_char_pos": 1237}, {"type": "A", "before": null, "after": "if there are no constraints on what master equation we can construct, then", "start_char_pos": 1305, "end_char_pos": 1305}, {"type": "A", "before": null, "after": "total", "start_char_pos": 1372, "end_char_pos": 1372}, {"type": "A", "before": null, "after": "with zero entropy production", "start_char_pos": 1424, "end_char_pos": 1424}, {"type": "R", "before": "provide", "after": "also imply that for certain P that can be implemented without hidden states, having additional states available permits an implementation 
that generates less heat. These results can be seen as uncovering and investigating", "start_char_pos": 1439, "end_char_pos": 1446}, {"type": "R", "before": "bound on", "after": "cost of", "start_char_pos": 1463, "end_char_pos": 1471}, {"type": "A", "before": null, "after": "hidden", "start_char_pos": 1561, "end_char_pos": 1561}], "sents_char_pos": [0, 225, 433, 598, 735, 933, 1068, 1182, 1276]} {"doc_id": "1709.01190", "revision_depth": "1", "before_revision": "We present FLASH ( %DIFDELCMD < {\\bf %%% F ast%DIFDELCMD < {\\bf %%% L SH%DIFDELCMD < {\\bf %%% A lgorithm for%DIFDELCMD < {\\bf %%% S imilarity search accelerated with %DIFDELCMD < {\\bf %%% H PC(High-Performance Computing) \\textbf{ ), a similarity search system for ultra-high dimensional datasets on a single machine, which does not require similarity computation. Our system is an auspicious illustration of the power of randomized algorithms carefully tailored for high-performance computing platforms. We leverage LSH style randomized indexing procedure and combine it with several principled techniques, such as reservoir sampling, recent advances in one-pass minwise hashing, and count based estimations . The combination, while retaining sound theoretical guarantees , reduces the computational as well as parallelization overhead of our proposal. We provide CPU and hybrid CPU-GPU implementations of FLASH for replicability of our results URL We evaluate FLASH on several real high dimensional datasets coming from different domains including text, malicious URL, click-through prediction, social networks, etc. Our experiments shed new light on the difficulties associated with datasets having several millions of dimensions. Current state-of-the-art implementations either fail on the presented scale or are orders of magnitude slower than our system . FLASH is capable of computing an approximate k-NN graph, from scratch, over full webspam dataset (1.3 billion nonzeros) in less than 10 seconds. 
Computing full k-NN graph in less than 10 seconds on webspam dataset, using brute-force (n^2D), will require at least 20 TFLOPS. We hope that FLASH gets adopted in practice .", "after_revision": "We present FLASH ( %DIFDELCMD < {\\bf %%% %DIFDELCMD < {\\bf %%% %DIFDELCMD < {\\bf %%% %DIFDELCMD < {\\bf %%% FastLSHAlgorithm forS imilarity search accelerated with %DIFDELCMD < {\\bf %%% \\textbf{H PC ), a similarity search system for ultra-high dimensional datasets on a single machine, that does not require similarity computations and is tailored for high-performance computing platforms. By leveraging a LSH style randomized indexing procedure and combining it with several principled techniques, such as reservoir sampling, recent advances in one-pass minwise hashing, and count based estimations , we reduce the computational and parallelization costs of similarity search, while retaining sound theoretical guarantees . We evaluate FLASH on several real , high-dimensional datasets from different domains , including text, malicious URL, click-through prediction, social networks, etc. Our experiments shed new light on the difficulties associated with datasets having several million dimensions. Current state-of-the-art implementations either fail on the presented scale or are orders of magnitude slower than FLASH . FLASH is capable of computing an approximate k-NN graph, from scratch, over the full webspam dataset (1.3 billion nonzeros) in less than 10 seconds. Computing a full k-NN graph in less than 10 seconds on the webspam dataset, using brute-force (n^2D), will require at least 20 teraflops. 
We provide CPU and GPU implementations of FLASH for replicability of our results .", "edit_actions": [{"type": "D", "before": "F", "after": null, "start_char_pos": 41, "end_char_pos": 42}, {"type": "D", "before": "ast", "after": null, "start_char_pos": 43, "end_char_pos": 46}, {"type": "D", "before": "L", "after": null, "start_char_pos": 68, "end_char_pos": 69}, {"type": "D", "before": "SH", "after": null, "start_char_pos": 70, "end_char_pos": 72}, {"type": "D", "before": "A", "after": null, "start_char_pos": 94, "end_char_pos": 95}, {"type": "D", "before": "lgorithm for", "after": null, "start_char_pos": 96, "end_char_pos": 108}, {"type": "R", "before": "S", "after": "F", "start_char_pos": 130, "end_char_pos": 131}, {"type": "A", "before": null, "after": "ast", "start_char_pos": 131, "end_char_pos": 131}, {"type": "A", "before": null, "after": "L", "start_char_pos": 131, "end_char_pos": 131}, {"type": "A", "before": null, "after": "SH", "start_char_pos": 131, "end_char_pos": 131}, {"type": "A", "before": null, "after": "A", "start_char_pos": 131, "end_char_pos": 131}, {"type": "A", "before": null, "after": "lgorithm for", "start_char_pos": 131, "end_char_pos": 131}, {"type": "A", "before": null, "after": "S", "start_char_pos": 131, "end_char_pos": 131}, {"type": "D", "before": "H", "after": null, "start_char_pos": 188, "end_char_pos": 189}, {"type": "D", "before": "PC(High-Performance Computing)", "after": null, "start_char_pos": 190, "end_char_pos": 220}, {"type": "A", "before": null, "after": "H", "start_char_pos": 229, "end_char_pos": 229}, {"type": "A", "before": null, "after": "PC", "start_char_pos": 230, "end_char_pos": 230}, {"type": "R", "before": "which", "after": "that", "start_char_pos": 318, "end_char_pos": 323}, {"type": "R", "before": "computation. 
Our system is an auspicious illustration of the power of randomized algorithms carefully", "after": "computations and is", "start_char_pos": 352, "end_char_pos": 453}, {"type": "R", "before": "We leverage", "after": "By leveraging a", "start_char_pos": 505, "end_char_pos": 516}, {"type": "R", "before": "combine", "after": "combining", "start_char_pos": 561, "end_char_pos": 568}, {"type": "R", "before": ". The combination,", "after": ", we reduce the computational and parallelization costs of similarity search,", "start_char_pos": 709, "end_char_pos": 727}, {"type": "R", "before": ", reduces the computational as well as parallelization overhead of our proposal. We provide CPU and hybrid CPU-GPU implementations of FLASH for replicability of our results URL We", "after": ". We", "start_char_pos": 773, "end_char_pos": 952}, {"type": "R", "before": "high dimensional datasets coming", "after": ", high-dimensional datasets", "start_char_pos": 984, "end_char_pos": 1016}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1040, "end_char_pos": 1040}, {"type": "R", "before": "millions of", "after": "million", "start_char_pos": 1211, "end_char_pos": 1222}, {"type": "R", "before": "our system", "after": "FLASH", "start_char_pos": 1350, "end_char_pos": 1360}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1439, "end_char_pos": 1439}, {"type": "A", "before": null, "after": "a", "start_char_pos": 1519, "end_char_pos": 1519}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1563, "end_char_pos": 1563}, {"type": "R", "before": "TFLOPS. We hope that FLASH gets adopted in practice", "after": "teraflops. 
We provide CPU and GPU implementations of FLASH for replicability of our results", "start_char_pos": 1632, "end_char_pos": 1683}], "sents_char_pos": [0, 364, 504, 710, 853, 1119, 1234, 1508, 1639]} {"doc_id": "1709.08075", "revision_depth": "1", "before_revision": "The calibration of local volatility models from observable option prices has always been an important problem in quantitative finance. The classical formula by Dupire, despite being heavily used by industry practitioners , can potentially produce unstable or even singular volatility surfaces ] . In this paper, we propose a new local volatility calibration technique using the theory of optimal transport. Inspired by the work of Benamou and Brenier, we formulate a martingale optimal transport problem which seeks a diffusion process that matches the known densities of an asset price at two different dates and minimizes a chosen cost function. The process effectively interpolates the dynamic of the asset price between the two dates while recovering the local volatility function. This approach leads to a convex optimisation problemand it is numerically solved ] via an augmented Lagrangian method and the alternative direction method of multipliers (ADMM) algorithm .", "after_revision": "The calibration of volatility models from observable option prices is a fundamental problem in quantitative finance. The most common approach among industry practitioners is based on the celebrated Dupire's formula 6], which requires the knowledge of vanilla option prices for a continuum of strikes and maturities that can only be obtained via some form of price interpolation . In this paper, we propose a new local volatility calibration technique using the theory of optimal transport. We formulate a time continuous martingale optimal transport problem , which seeks a martingale diffusion process that matches the known densities of an asset price at two different dates , while minimizing a chosen cost function. 
Inspired by the seminal work of Benamou and Brenier 1], we formulate the problem as a convex optimization problem, derive its dual formulation, and solve it numerically via an augmented Lagrangian method and the alternative direction method of multipliers (ADMM) algorithm . The solution effectively reconstructs the dynamic of the asset price between the two dates by recovering the optimal local volatility function, without requiring any time interpolation of the option prices .", "edit_actions": [{"type": "D", "before": "local", "after": null, "start_char_pos": 19, "end_char_pos": 24}, {"type": "R", "before": "has always been an important", "after": "is a fundamental", "start_char_pos": 73, "end_char_pos": 101}, {"type": "R", "before": "classical formula by Dupire, despite being heavily used by industry practitioners , can potentially produce unstable or even singular volatility surfaces", "after": "most common approach among industry practitioners is based on the celebrated Dupire's formula", "start_char_pos": 139, "end_char_pos": 292}, {"type": "A", "before": null, "after": "6", "start_char_pos": 293, "end_char_pos": 293}, {"type": "A", "before": null, "after": ", which requires the knowledge of vanilla option prices for a continuum of strikes and maturities that can only be obtained via some form of price interpolation", "start_char_pos": 294, "end_char_pos": 294}, {"type": "R", "before": "Inspired by the work of Benamou and Brenier, we formulate a", "after": "We formulate a time continuous", "start_char_pos": 407, "end_char_pos": 466}, {"type": "A", "before": null, "after": ",", "start_char_pos": 504, "end_char_pos": 504}, {"type": "A", "before": null, "after": "martingale", "start_char_pos": 519, "end_char_pos": 519}, {"type": "R", "before": "and minimizes", "after": ", while minimizing", "start_char_pos": 612, "end_char_pos": 625}, {"type": "R", "before": "The process effectively interpolates the dynamic of the asset price between the two dates while 
recovering the local volatility function. This approach leads to a convex optimisation problemand it is numerically solved", "after": "Inspired by the seminal work of Benamou and Brenier", "start_char_pos": 650, "end_char_pos": 868}, {"type": "A", "before": null, "after": "1", "start_char_pos": 869, "end_char_pos": 869}, {"type": "A", "before": null, "after": ", we formulate the problem as a convex optimization problem, derive its dual formulation, and solve it numerically", "start_char_pos": 870, "end_char_pos": 870}, {"type": "A", "before": null, "after": ". The solution effectively reconstructs the dynamic of the asset price between the two dates by recovering the optimal local volatility function, without requiring any time interpolation of the option prices", "start_char_pos": 975, "end_char_pos": 975}], "sents_char_pos": [0, 134, 406, 649, 787]} {"doc_id": "1709.09209", "revision_depth": "1", "before_revision": "We present an efficient algorithm for a problem in the interface between clustering and graph embeddings. An embedding \\varphi:G\\rightarrow M of a graph G into a 2-manifold M maps the vertices in V(G) to distinct points and the edges in E(G) to interior-disjoint Jordan arcs between the corresponding vertices. In applications in clustering, cartography, and visualization, nearby vertices and edges are often bundled to a common node or arc , due to data compression or low resolution. This raises the computational problem of deciding whether a given map \\varphi:G\\rightarrow M comes from an embedding. A map \\varphi:G\\rightarrow M is a weak embedding if it can be perturbed into an embedding \\psi_\\varepsilon:G\\rightarrow M with \\|\\varphi-\\psi_\\varepsilon\\|<\\varepsilon for every \\varepsilon> 0. A polynomial-time algorithm for recognizing weak embeddings was recently found by Fulek and Kyncl , 2017 , which reduces to solving a system of linear equations over Z_2. 
It runs in O(n^{2\\omega})\\leq O(n^{4.75}) time, where \\omega \\approx 2.373 is the matrix multiplication exponent and n is the number of vertices and edges of G. We improve the running time to O(n\\log n). Our algorithm is also conceptually simpler than Fulek and Kyn\\v{c : We perform a sequence of \\emph{local operations} that gradually \"untangles\" the image \\varphi(G) into an embedding \\psi(G), or reports that \\varphi is not a weak embedding. It generalizes a recent technique developed for the case that G is a cycle and the embedding is a simple polygon Akitaya et al., 2016%DIFDELCMD < ]%%% , and combines local constraints on the orientation of subgraphs directly, thereby eliminating the need for solving large systems of linear equations.", "after_revision": "We present an efficient algorithm for a problem in the interface between clustering and graph embeddings. An embedding \\varphi:G\\rightarrow M of a graph G into a 2-manifold M maps the vertices in V(G) to distinct points and the edges in E(G) to interior-disjoint Jordan arcs between the corresponding vertices. In applications in clustering, cartography, and visualization, nearby vertices and edges are often bundled to the same point or overlapping arcs , due to data compression or low resolution. This raises the computational problem of deciding whether a given map \\varphi:G\\rightarrow M comes from an embedding. A map \\varphi:G\\rightarrow M is a weak embedding if it can be perturbed into an embedding \\psi_\\varepsilon:G\\rightarrow M with \\|\\varphi-\\psi_\\varepsilon\\|<\\varepsilon for every \\varepsilon> 0, where \\|.\\| is the unform norm. A polynomial-time algorithm for recognizing weak embeddings has recently been found by Fulek and Kyncl . It reduces the problem to solving a system of linear equations over Z_2. It runs in O(n^{2\\omega})\\leq O(n^{4.75}) time, where \\omega \\in 2, 2.373 ) is the matrix multiplication exponent and n is the number of vertices and edges of G. 
We improve the running time to O(n\\log n). Our algorithm is also conceptually simpler : We perform a sequence of \\emph{local operations} that gradually \"untangles\" the image \\varphi(G) into an embedding \\psi(G), or reports that \\varphi is not a weak embedding. It %DIFDELCMD < ]%%% combines local constraints on the orientation of subgraphs directly, thereby eliminating the need for solving large systems of linear equations.", "edit_actions": [{"type": "R", "before": "a common node or arc", "after": "the same point or overlapping arcs", "start_char_pos": 421, "end_char_pos": 441}, {"type": "R", "before": "weak embedding", "after": "weak embedding", "start_char_pos": 639, "end_char_pos": 653}, {"type": "R", "before": "0.", "after": "0, where \\|.\\| is the unform norm.", "start_char_pos": 796, "end_char_pos": 798}, {"type": "R", "before": "was recently found by", "after": "has recently been found by", "start_char_pos": 859, "end_char_pos": 880}, {"type": "D", "before": ", 2017", "after": null, "start_char_pos": 897, "end_char_pos": 903}, {"type": "R", "before": ", which reduces", "after": ". 
It reduces the problem", "start_char_pos": 904, "end_char_pos": 919}, {"type": "R", "before": "\\approx", "after": "\\in", "start_char_pos": 1031, "end_char_pos": 1038}, {"type": "A", "before": null, "after": "2,", "start_char_pos": 1039, "end_char_pos": 1039}, {"type": "A", "before": null, "after": ")", "start_char_pos": 1046, "end_char_pos": 1046}, {"type": "D", "before": "than", "after": null, "start_char_pos": 1219, "end_char_pos": 1223}, {"type": "D", "before": "Fulek and Kyn\\v{c", "after": null, "start_char_pos": 1224, "end_char_pos": 1241}, {"type": "D", "before": "generalizes a recent technique developed for the case that G is a cycle and the embedding is a simple polygon", "after": null, "start_char_pos": 1420, "end_char_pos": 1529}, {"type": "D", "before": "Akitaya et al., 2016", "after": null, "start_char_pos": 1530, "end_char_pos": 1550}, {"type": "D", "before": ", and", "after": null, "start_char_pos": 1568, "end_char_pos": 1573}], "sents_char_pos": [0, 105, 310, 486, 604, 969, 1175, 1416]} {"doc_id": "1710.10888", "revision_depth": "1", "before_revision": "Let P be a set of n points in the plane and _{\\theta}(P), has minimum (or maximum) area in optimal O(n\\log n) time and O(n) space, improving the previous O(n^2) bound. Let } O be a set of k lines passing through the origin . We show=\\pi-\\alpha_i and \\Theta=\\min\\{\\Theta_i:i=1,\\ldots,2k\\}. We further obtain} : (1) How such that \\Theta}\\ge to compute the O -hull of P in \\Theta (n\\log n) time and O(n) space, ({2} the complexities are O(\\frac{n}{\\Theta}\\log n) time and O(\\frac{n}{\\Theta}) space. 
(} 2) how to such that \\Theta}\\ge compute and maintain the rotated hull \\mathcal{OH_{\\theta}(P ) } }} for \\theta\\in [0,2\\pi) in O(kn\\log n) time and O(kn) space, and (3) how to compute in \\Theta(n{\\Theta}} \\log n) time and O( n) space a value of \\theta for which the rectilinear convex hull, \\mathcal{RH_{\\theta}(P ), has minimum area, thus improving the previously best O(n^2) algorithm presented by Bae et al.in 2009.} {\\Theta}) space if \\Theta<\\frac{\\pi}{2}. (3) Finally, given a set \\mathcal{O} such that \\Theta}\\ge }}-convex hull of P of minimum (or maximum) area over all \\theta\\in }[", "after_revision": "Let P be a set of n points in the plane . We compute the value of \\theta\\in 0,2\\pi) for which the rectilinear convex hull of P, denoted by \\mathcal{RH_{\\theta}(P), has minimum (or maximum) area in optimal O(n\\log n) time and O(n) space, improving the previous O(n^2) bound. Let } O be a set of k lines through the origin sorted by slope and let \\alpha_i be the aperture angles of the 2k sectors defined by every pair of two consecutive lines. Let \\Theta_{i=\\pi-\\alpha_i and \\Theta=\\min\\{\\Theta_i:i=1,\\ldots,2k\\}. We further obtain} : (1) Given a set \\mathcal{O such that \\Theta}\\ge\\frac{\\pi to compute the O -convex hull of P in optimal O (n\\log n) time and O(n) space, while if \\Theta<\\frac{\\pi{2} the complexities are O(\\frac{n}{\\Theta}\\log n) time and O(\\frac{n}{\\Theta}) space. (} 2) Given a set \\mathcal{O such that \\Theta}\\ge\\frac{\\pi compute and maintain the _{\\theta}(P ) } boundary of the \\mathcal{O}}_{\\theta for \\theta\\in [0,2\\pi) in O(kn\\log n) time and O(kn) space, or in O(k\\frac{n{\\Theta}} \\log n) time and O( _{\\theta}(P ), has minimum area, thus improving the previously best O(n^2) algorithm presented by Bae et al.in 2009.} k\\frac{n{\\Theta}) space if \\Theta<\\frac{\\pi}{2}. 
(3) Finally, given a set \\mathcal{O} such that \\Theta}\\ge\\frac{\\pi \\mathcal{O}}_{\\theta-convex hull of P of minimum (or maximum) area over all \\theta\\in }[0,2\\pi) in O(kn\\log n) time and O(kn) space.", "edit_actions": [{"type": "R", "before": "and", "after": ". We compute the value of \\theta\\in", "start_char_pos": 40, "end_char_pos": 43}, {"type": "A", "before": null, "after": "0,2\\pi) for which the rectilinear convex hull of P, denoted by \\mathcal{RH", "start_char_pos": 44, "end_char_pos": 44}, {"type": "D", "before": "passing", "after": null, "start_char_pos": 196, "end_char_pos": 203}, {"type": "R", "before": ". We show", "after": "sorted by slope and let \\alpha_i be the aperture angles of the 2k sectors defined by every pair of two consecutive lines. Let \\Theta_{i", "start_char_pos": 223, "end_char_pos": 232}, {"type": "R", "before": "How", "after": "Given a set \\mathcal{O", "start_char_pos": 314, "end_char_pos": 317}, {"type": "A", "before": null, "after": "\\frac{\\pi", "start_char_pos": 338, "end_char_pos": 338}, {"type": "R", "before": "-hull", "after": "-convex hull", "start_char_pos": 356, "end_char_pos": 361}, {"type": "R", "before": "\\Theta", "after": "optimal O", "start_char_pos": 370, "end_char_pos": 376}, {"type": "R", "before": "(", "after": "while if \\Theta<\\frac{\\pi", "start_char_pos": 408, "end_char_pos": 409}, {"type": "R", "before": "how to", "after": "Given a set \\mathcal{O", "start_char_pos": 502, "end_char_pos": 508}, {"type": "A", "before": null, "after": "\\frac{\\pi", "start_char_pos": 529, "end_char_pos": 529}, {"type": "D", "before": "rotated hull \\mathcal{OH", "after": null, "start_char_pos": 555, "end_char_pos": 579}, {"type": "A", "before": null, "after": "boundary of the", "start_char_pos": 595, "end_char_pos": 595}, {"type": "A", "before": null, "after": "\\mathcal{O", "start_char_pos": 596, "end_char_pos": 596}, {"type": "A", "before": null, "after": "_{\\theta", "start_char_pos": 598, "end_char_pos": 598}, 
{"type": "R", "before": "and (3) how to compute in \\Theta(n", "after": "or in O(k\\frac{n", "start_char_pos": 659, "end_char_pos": 693}, {"type": "D", "before": "n) space a value of \\theta for which the rectilinear convex hull, \\mathcal{RH", "after": null, "start_char_pos": 723, "end_char_pos": 800}, {"type": "A", "before": null, "after": "k\\frac{n", "start_char_pos": 918, "end_char_pos": 918}, {"type": "A", "before": null, "after": "\\frac{\\pi", "start_char_pos": 1016, "end_char_pos": 1016}, {"type": "A", "before": null, "after": "\\mathcal{O", "start_char_pos": 1017, "end_char_pos": 1017}, {"type": "A", "before": null, "after": "_{\\theta", "start_char_pos": 1019, "end_char_pos": 1019}, {"type": "A", "before": null, "after": "0,2\\pi) in O(kn\\log n) time and O(kn) space.", "start_char_pos": 1087, "end_char_pos": 1087}], "sents_char_pos": [0, 167, 224, 288, 495, 916]} {"doc_id": "1711.02048", "revision_depth": "4", "before_revision": "Many experiments randomly assign individuals to either a treatment group with access to a program or a control group without such access. I study what we can learn about the average effects of providing access to the program given data from such experiments when individuals do not comply with their assigned status and when the data may only provide partial information on the receipt of program access across individuals. I propose a new nonparametric multiple treatment selection model that provides a general setup to define a range of parameters evaluating the effects of program access and to exploit the partial information the data may provide on where access was received . I illustrate how a computational procedure can be used to learn about these parameters in this model. Using the developed framework , I analyze the effects of providing access to the Head Start preschool program given data from the Head Start Impact Study. 
I find that providing access to Head Start can have positive effects on test scores and that these effects can negatively depend on whether access to an alternative preschool was available. In addition , I find that the earning benefits associated with the test score gains can outweigh the net costs for various policies providing access to Head Start .", "after_revision": "I develop tools to learn about the average effects of providing an offer to participate in a program given data from an experiment that randomizes offers. I allow for the complication that individuals may not comply with their assigned status in the sense that those who are not provided an offer may receive one from outside the experiment, and that the data may provide only partial information on the receipt of an offer across individuals. To do so, I propose a new nonparametric selection model with unobserved choice sets that provides a conceptual framework to define a range of parameters evaluating the effects of an offer, exploit the partial information available on offer receipt, and also consider an array of identifying assumptions . I illustrate how a computational procedure can be used to sharply learn about the parameters under the various assumptions. Using these tools , I analyze the effects of a policy that provides an offer to participate in the Head Start preschool program given data from the Head Start Impact Study. I find that such a policy affects a large number of children who take up the offer, and that they subsequently have positive effects on test scores . These effects primarily arise from children who do not have any preschool as an outside option. 
Performing a cost-benefit analysis , I find that the earning benefits associated with the test score gains can outweigh the net costs associated with the take up of the offer .", "edit_actions": [{"type": "R", "before": "Many experiments randomly assign individuals to either a treatment group with access to a program or a control group without such access. I study what we can", "after": "I develop tools to", "start_char_pos": 0, "end_char_pos": 157}, {"type": "R", "before": "access to the", "after": "an offer to participate in a", "start_char_pos": 203, "end_char_pos": 216}, {"type": "R", "before": "such experiments when individuals do", "after": "an experiment that randomizes offers. I allow for the complication that individuals may", "start_char_pos": 241, "end_char_pos": 277}, {"type": "R", "before": "and when", "after": "in the sense that those who are not provided an offer may receive one from outside the experiment, and that", "start_char_pos": 316, "end_char_pos": 324}, {"type": "R", "before": "only provide", "after": "provide only", "start_char_pos": 338, "end_char_pos": 350}, {"type": "R", "before": "program access", "after": "an offer", "start_char_pos": 389, "end_char_pos": 403}, {"type": "A", "before": null, "after": "To do so,", "start_char_pos": 424, "end_char_pos": 424}, {"type": "R", "before": "multiple treatment selection model", "after": "selection model with unobserved choice sets", "start_char_pos": 455, "end_char_pos": 489}, {"type": "R", "before": "general setup", "after": "conceptual framework", "start_char_pos": 506, "end_char_pos": 519}, {"type": "R", "before": "program access and to", "after": "an offer,", "start_char_pos": 578, "end_char_pos": 599}, {"type": "R", "before": "the data may provide on where access was received", "after": "available on offer receipt, and also consider an array of identifying assumptions", "start_char_pos": 632, "end_char_pos": 681}, {"type": "R", "before": "learn about these parameters in this model. 
Using the developed framework", "after": "sharply learn about the parameters under the various assumptions. Using these tools", "start_char_pos": 742, "end_char_pos": 815}, {"type": "R", "before": "providing access to", "after": "a policy that provides an offer to participate in", "start_char_pos": 843, "end_char_pos": 862}, {"type": "R", "before": "providing access to Head Start can", "after": "such a policy affects a large number of children who take up the offer, and that they subsequently", "start_char_pos": 953, "end_char_pos": 987}, {"type": "R", "before": "and that these effects can negatively depend on whether access to an alternative preschool was available. In addition", "after": ". These effects primarily arise from children who do not have any preschool as an outside option. Performing a cost-benefit analysis", "start_char_pos": 1025, "end_char_pos": 1142}, {"type": "R", "before": "for various policies providing access to Head Start", "after": "associated with the take up of the offer", "start_char_pos": 1242, "end_char_pos": 1293}], "sents_char_pos": [0, 137, 423, 683, 785, 940, 1130]} {"doc_id": "1711.03709", "revision_depth": "1", "before_revision": "Many recent caching systems aim to improve hit ratios, but there is no good sense among practitioners of how much further hit ratios can be improved. In other words, should the systems community continue working on this problem? Currently, there is no principled answer to this question. Most prior work assumes that objects have the same size, but in practice object sizes often vary by several orders of magnitude . The few known results for variable object sizes provide very weak guarantees and are impractical to compute on traces of realistic length. We propose a new method to compute the offline optimal hit ratio under variable object sizes . Our key insight is to represent caching as a min-cost flow problem, hence we call our method the flow-based offline optimal (FOO). 
We show that, under simple independence assumptions and Zipf popularities , FOO's bounds become tight as the number of objects goes to infinity. From FOOwe develop fast, practical methods to compute nearly tight bounds for the optimal hit ratio, which we call practical flow-based offline optimal (P-FOO). P-FOO enables the first analysis of optimal caching on realistic traces with hundreds of millions of requests. We evaluate P-FOO on several production traces, where results show that recent caching systems are still far from optimal .", "after_revision": "Many recent caching systems aim to improve miss ratios, but there is no good sense among practitioners of how much further miss ratios can be improved. In other words, should the systems community continue working on this problem? Currently, there is no principled answer to this question. In practice, object sizes often vary by several orders of magnitude , where computing the optimal miss ratio (OPT) is known to be NP-hard . The few known results on caching with variable object sizes provide very weak bounds and are impractical to compute on traces of realistic length. We propose a new method to compute upper and lower bounds on OPT . Our key insight is to represent caching as a min-cost flow problem, hence we call our method the flow-based offline optimal (FOO). We prove that, under simple independence assumptions , FOO's bounds become tight as the number of objects goes to infinity. Indeed, FOO's error over 10M requests of production CDN and storage traces is negligible: at most 0.3\\%. FOO thus reveals, for the first time, the limits of caching with variable object sizes. While FOO is very accurate, it is computationally impractical on traces with hundreds of millions of requests. We therefore extend FOO to obtain more efficient bounds on OPT, which we call practical flow-based offline optimal (PFOO). We evaluate PFOO on several full production traces and use it to compare OPT to prior online policies. 
This analysis shows that current caching systems are in fact still far from optimal , suffering 11-43\\% more cache misses than OPT, whereas the best prior offline bounds suggest that there is essentially no room for improvement .", "edit_actions": [{"type": "R", "before": "hit", "after": "miss", "start_char_pos": 43, "end_char_pos": 46}, {"type": "R", "before": "hit", "after": "miss", "start_char_pos": 122, "end_char_pos": 125}, {"type": "R", "before": "Most prior work assumes that objects have the same size, but in practice", "after": "In practice,", "start_char_pos": 288, "end_char_pos": 360}, {"type": "A", "before": null, "after": ", where computing the optimal miss ratio (OPT) is known to be NP-hard", "start_char_pos": 416, "end_char_pos": 416}, {"type": "R", "before": "for", "after": "on caching with", "start_char_pos": 441, "end_char_pos": 444}, {"type": "R", "before": "guarantees", "after": "bounds", "start_char_pos": 485, "end_char_pos": 495}, {"type": "R", "before": "the offline optimal hit ratio under variable object sizes", "after": "upper and lower bounds on OPT", "start_char_pos": 593, "end_char_pos": 650}, {"type": "R", "before": "show", "after": "prove", "start_char_pos": 787, "end_char_pos": 791}, {"type": "D", "before": "and Zipf popularities", "after": null, "start_char_pos": 836, "end_char_pos": 857}, {"type": "R", "before": "From FOOwe develop fast, practical methods to compute nearly tight bounds for the optimal hit ratio, which we call practical flow-based offline optimal (P-FOO). P-FOO enables the first analysis of optimal caching on realistic", "after": "Indeed, FOO's error over 10M requests of production CDN and storage traces is negligible: at most 0.3\\%. FOO thus reveals, for the first time, the limits of caching with variable object sizes. 
While FOO is very accurate, it is computationally impractical on", "start_char_pos": 929, "end_char_pos": 1154}, {"type": "R", "before": "evaluate P-FOO on several production traces, where results show that recent", "after": "therefore extend FOO to obtain more efficient bounds on OPT, which we call practical flow-based offline optimal (PFOO). We evaluate PFOO on several full production traces and use it to compare OPT to prior online policies. This analysis shows that current", "start_char_pos": 1204, "end_char_pos": 1279}, {"type": "A", "before": null, "after": "in fact", "start_char_pos": 1300, "end_char_pos": 1300}, {"type": "A", "before": null, "after": ", suffering 11-43\\% more cache misses than OPT, whereas the best prior offline bounds suggest that there is essentially no room for improvement", "start_char_pos": 1324, "end_char_pos": 1324}], "sents_char_pos": [0, 149, 228, 287, 418, 557, 652, 783, 928, 1089, 1200]} {"doc_id": "1711.04392", "revision_depth": "1", "before_revision": "It has been well known in financial economics that factor betas depend on observed instruments such as firm specific characteristics and macroeconomic variables, and a key object of interest is the effect of instruments on the factor betas. One of the key features of our model is that we specify the factor betas as functions of time-varying observed instruments that pick up long-run beta fluctuations, plus an orthogonal idiosyncratic component that captures high-frequency movements in beta . It is often the case that researchers do not know whether or not the idiosyncratic beta exists, or its strengths, and thus uniformity is essential for inferences. It is found that the limiting distribution of the estimated instrument effect has a discontinuity when the strength of the idiosyncratic beta is near zero , which makes usual inferences fail to be valid and produce misleading results. 
In addition, the usual \"plug-in\" method using the estimated asymptotic variance is only valid pointwise . The central goal is to make inference about the effect on the betas of firms' instruments, and to conduct out-of-sample forecast of integrated volatilities using estimated factors. Both procedures should be valid uniformly over a broad class of data generating processes for idiosyncratic betas with various signal strengths and degrees of time-variant. We show that a cross-sectional bootstrap procedure is essential for the uniform inference, and our procedure also features a bias correction for the effect of estimating unknown factors.", "after_revision": "We consider continuous-time models with a large panel of moment conditions, where the structural parameter depends on a set of characteristics, whose effects are of interest. The leading example is the linear factor model in financial economics where factor betas depend on observed characteristics such as firm specific instruments and macroeconomic variables, and their effects pick up long-run time-varying beta fluctuations. We specify the factor betas as the sum of characteristic effects and an orthogonal idiosyncratic parameter that captures high-frequency movements . It is often the case that researchers do not know whether or not the latter exists, or its strengths, and thus the inference about the characteristic effects should be valid uniformly over a broad class of data generating processes for idiosyncratic parameters. We construct our estimation and inference in a two-step continuous-time GMM framework. It is found that the limiting distribution of the estimated characteristic effects has a discontinuity when the variance of the idiosyncratic parameter is near the boundary (zero) , which makes the usual \"plug-in\" method using the estimated asymptotic variance only valid pointwise and may produce either over- or under- coveraging probabilities. 
We show that the uniformity can be achieved by cross-sectional bootstrap . Our procedure allows both known and estimated factors, and also features a bias correction for the effect of estimating unknown factors.", "edit_actions": [{"type": "R", "before": "It has been well known", "after": "We consider continuous-time models with a large panel of moment conditions, where the structural parameter depends on a set of characteristics, whose effects are of interest. The leading example is the linear factor model", "start_char_pos": 0, "end_char_pos": 22}, {"type": "R", "before": "that", "after": "where", "start_char_pos": 46, "end_char_pos": 50}, {"type": "R", "before": "instruments", "after": "characteristics", "start_char_pos": 83, "end_char_pos": 94}, {"type": "R", "before": "characteristics", "after": "instruments", "start_char_pos": 117, "end_char_pos": 132}, {"type": "R", "before": "a key object of interest is the effect of instruments on the factor betas. One of the key features of our model is that we", "after": "their effects pick up long-run time-varying beta fluctuations. We", "start_char_pos": 166, "end_char_pos": 288}, {"type": "R", "before": "functions of time-varying observed instruments that pick up long-run beta fluctuations, plus", "after": "the sum of characteristic effects and", "start_char_pos": 317, "end_char_pos": 409}, {"type": "R", "before": "component", "after": "parameter", "start_char_pos": 438, "end_char_pos": 447}, {"type": "D", "before": "in beta", "after": null, "start_char_pos": 487, "end_char_pos": 494}, {"type": "R", "before": "idiosyncratic beta", "after": "latter", "start_char_pos": 566, "end_char_pos": 584}, {"type": "R", "before": "uniformity is essential for inferences.", "after": "the inference about the characteristic effects should be valid uniformly over a broad class of data generating processes for idiosyncratic parameters. 
We construct our estimation and inference in a two-step continuous-time GMM framework.", "start_char_pos": 620, "end_char_pos": 659}, {"type": "R", "before": "instrument effect", "after": "characteristic effects", "start_char_pos": 720, "end_char_pos": 737}, {"type": "R", "before": "strength", "after": "variance", "start_char_pos": 767, "end_char_pos": 775}, {"type": "R", "before": "beta is near zero", "after": "parameter is near the boundary (zero)", "start_char_pos": 797, "end_char_pos": 814}, {"type": "D", "before": "usual inferences fail to be valid and produce misleading results. In addition,", "after": null, "start_char_pos": 829, "end_char_pos": 907}, {"type": "D", "before": "is", "after": null, "start_char_pos": 975, "end_char_pos": 977}, {"type": "R", "before": ". The central goal is to make inference about the effect on the betas of firms' instruments, and to conduct out-of-sample forecast of integrated volatilities using estimated factors. Both procedures should be valid uniformly over a broad class of data generating processes for idiosyncratic betas with various signal strengths and degrees of time-variant.", "after": "and may produce either over- or under- coveraging probabilities.", "start_char_pos": 999, "end_char_pos": 1354}, {"type": "R", "before": "a", "after": "the uniformity can be achieved by", "start_char_pos": 1368, "end_char_pos": 1369}, {"type": "R", "before": "procedure is essential for the uniform inference, and our procedure", "after": ". Our procedure allows both known and estimated factors, and", "start_char_pos": 1396, "end_char_pos": 1463}], "sents_char_pos": [0, 240, 496, 659, 894, 1000, 1181, 1354]} {"doc_id": "1711.09877", "revision_depth": "1", "before_revision": "Given the recent trend towards validating the neuroimaging statistical methods, we compared the most popular functional magnetic resonance imaging (fMRI) analysis softwares : AFNI, FSL and SPM, with regard to temporal autocorrelation modelling . 
We used both resting state and task-based fMRI data, altogether 10 datasets containing 780 scans corresponding to different scanning sequences and different subject populations. In analyses of each fMRI scan we considered different assumed experimental designs , as well as different spatial smoothing levelsand different detrending options . For data used as null data the use of FSL and SPM resulted in much higher false positive rates than the use of AFNI. On the other hand, due to SPM modelling temporal autocorrelation in the least flexible way, it can introduce negative autocorrelations during pre-whitening for scans with long repetition times. For one dataset we observed a big loss of sensitivity when SPM was used. Interestingly, because pre-whitening in FSL and SPM does not remove a substantial part of the temporal autocorrelation in the noise, we observed a relationship that the lower the assumed experimental design frequency, the more likely it was to observe significant activation. Though temporal autocorrelation modelling in AFNI was not perfect, its performance was much higher than the performance of temporal autocorrelation modelling in FSL and SPM. FSL and SPM could improve their autocorrelation modelling approaches for example adopting a noise model similar to the one used by AFNI.", "after_revision": "Given the recent controversies in some neuroimaging statistical methods, we compared the most frequently used functional Magnetic Resonance Imaging (fMRI) analysis packages : AFNI, FSL and SPM, with regard to temporal autocorrelation modeling . We used both resting state and task-based fMRI data, altogether ten datasets containing 780 scans corresponding to different scanning sequences and subject populations. In analyses of each fMRI scan we considered different assumed experimental designs and smoothing levels . For data with no expected experimentally-induced activation, FSL and SPM resulted in much higher false positive rates than AFNI. 
We showed it was because of residual positive autocorrelation left after pre-whitening. On the other hand, due to SPM modeling temporal autocorrelation in the least flexible way, it can introduce negative autocorrelations during pre-whitening for scans with long repetition times. As a result, for one task-based dataset we observed a large loss of sensitivity when SPM was used. Interestingly, because pre-whitening in FSL and SPM does not remove a substantial part of the temporal autocorrelation in the noise, we found a strong relationship in which the lower the assumed experimental design frequency, the more likely it was to observe significant activation. Though temporal autocorrelation modeling in AFNI was not perfect, its performance was much higher than the performance of temporal autocorrelation modeling in FSL and SPM. FSL and SPM could improve their autocorrelation modeling approaches for example by adopting a noise model similar to the one used by AFNI.", "edit_actions": [{"type": "R", "before": "trend towards validating the", "after": "controversies in some", "start_char_pos": 17, "end_char_pos": 45}, {"type": "R", "before": "popular functional magnetic resonance imaging", "after": "frequently used functional Magnetic Resonance Imaging", "start_char_pos": 101, "end_char_pos": 146}, {"type": "R", "before": "softwares", "after": "packages", "start_char_pos": 163, "end_char_pos": 172}, {"type": "R", "before": "modelling", "after": "modeling", "start_char_pos": 234, "end_char_pos": 243}, {"type": "R", "before": "10", "after": "ten", "start_char_pos": 310, "end_char_pos": 312}, {"type": "D", "before": "different", "after": null, "start_char_pos": 393, "end_char_pos": 402}, {"type": "R", "before": ", as well as different spatial smoothing levelsand different detrending options", "after": "and smoothing levels", "start_char_pos": 507, "end_char_pos": 586}, {"type": "R", "before": "used as null data the use of", "after": "with no expected experimentally-induced 
activation,", "start_char_pos": 598, "end_char_pos": 626}, {"type": "R", "before": "the use of AFNI.", "after": "AFNI. We showed it was because of residual positive autocorrelation left after pre-whitening.", "start_char_pos": 689, "end_char_pos": 705}, {"type": "R", "before": "modelling", "after": "modeling", "start_char_pos": 736, "end_char_pos": 745}, {"type": "R", "before": "For one", "after": "As a result, for one task-based", "start_char_pos": 900, "end_char_pos": 907}, {"type": "R", "before": "big", "after": "large", "start_char_pos": 930, "end_char_pos": 933}, {"type": "R", "before": "observed a relationship that", "after": "found a strong relationship in which", "start_char_pos": 1109, "end_char_pos": 1137}, {"type": "R", "before": "modelling", "after": "modeling", "start_char_pos": 1281, "end_char_pos": 1290}, {"type": "R", "before": "modelling", "after": "modeling", "start_char_pos": 1397, "end_char_pos": 1406}, {"type": "R", "before": "modelling", "after": "modeling", "start_char_pos": 1471, "end_char_pos": 1480}, {"type": "A", "before": null, "after": "by", "start_char_pos": 1504, "end_char_pos": 1504}], "sents_char_pos": [0, 245, 423, 588, 705, 899, 972, 1248]} {"doc_id": "1712.02150", "revision_depth": "2", "before_revision": "An important part of system modeling is determining parameter values, particularly for biomolecular systems, where direct measurements of individual parameters is often hard. While extended Kalman filters have been used for this purpose, the choice of the process noise covariance is generally unclear. Here , we address this issue for biomolecular systems using a combination of Monte Carlo simulations and experimental data, exploiting the dependence of the process noise on the states and parameters, as given in the Langevin framework. We generalize a hybrid extended Kalman filtering technique by updating the estimate-dependent process noise covariance at each time step . 
We compare the performance of this framework with different fixed values of process noise covariance in biomolecular system models, including an oscillator model, as well as in experimentally measured data for a negative transcriptional feedback circuit. We find that the extended Kalman filter with such process noise covariance update can be optimal in the sense that the innovation sequence becomes white and in achieving balance between the mean square estimation error and parameter convergence time. These results may help in the use of extended Kalman filters for systems with process noise covariance that depends on states and/or parameters.", "after_revision": "An important part of system modeling is determining parameter values, particularly for biomolecular systems, where direct measurements of individual parameters are typically hard. While Extended Kalman Filters have been used for this purpose, the choice of the process noise covariance is generally unclear. In this chapter , we address this issue for biomolecular systems using a combination of Monte Carlo simulations and experimental data, exploiting the dependence of the process noise covariance on the states and parameters, as given in the Langevin framework. We adapt a Hybrid Extended Kalman Filtering technique by updating the process noise covariance at each time step based on estimates . We compare the performance of this framework with different fixed values of process noise covariance in biomolecular system models, including an oscillator model, as well as in experimentally measured data for a negative transcriptional feedback circuit. We find that the Extended Kalman Filter with such process noise covariance update is closer to the optimality condition in the sense that the innovation sequence becomes white and in achieving a balance between the mean square estimation error and parameter convergence time. 
The results of this chapter may help in the use of Extended Kalman Filters for systems where process noise covariance depends on states and/or parameters.", "edit_actions": [{"type": "R", "before": "is often", "after": "are typically", "start_char_pos": 160, "end_char_pos": 168}, {"type": "R", "before": "extended Kalman filters", "after": "Extended Kalman Filters", "start_char_pos": 181, "end_char_pos": 204}, {"type": "R", "before": "Here", "after": "In this chapter", "start_char_pos": 303, "end_char_pos": 307}, {"type": "A", "before": null, "after": "covariance", "start_char_pos": 474, "end_char_pos": 474}, {"type": "R", "before": "generalize a hybrid extended Kalman filtering", "after": "adapt a Hybrid Extended Kalman Filtering", "start_char_pos": 544, "end_char_pos": 589}, {"type": "D", "before": "estimate-dependent", "after": null, "start_char_pos": 616, "end_char_pos": 634}, {"type": "A", "before": null, "after": "based on estimates", "start_char_pos": 678, "end_char_pos": 678}, {"type": "R", "before": "extended Kalman filter", "after": "Extended Kalman Filter", "start_char_pos": 953, "end_char_pos": 975}, {"type": "R", "before": "can be optimal", "after": "is closer to the optimality condition", "start_char_pos": 1018, "end_char_pos": 1032}, {"type": "A", "before": null, "after": "a", "start_char_pos": 1106, "end_char_pos": 1106}, {"type": "R", "before": "These results", "after": "The results of this chapter", "start_char_pos": 1188, "end_char_pos": 1201}, {"type": "R", "before": "extended Kalman filters for systems with", "after": "Extended Kalman Filters for systems where", "start_char_pos": 1225, "end_char_pos": 1265}, {"type": "D", "before": "that", "after": null, "start_char_pos": 1291, "end_char_pos": 1295}], "sents_char_pos": [0, 174, 302, 540, 680, 935, 1187]} {"doc_id": "1712.04064", "revision_depth": "1", "before_revision": "When studying flocking/swarming behaviors in animals one is interested in quantifying and comparing the dynamics of the 
clustering induced by the coalescence and disbanding of animals in different groups. Motivated by this, we study the problem of obtaining persistent homology based summaries of time-dependent metric data. Given a finite dynamic metric space (DMS ), we construct the zigzag simplicial filtration arising from applying the Rips simplicial complex construction (with a fixed scale parameter) to this finite DMS. Upon passing to 0-th homology with field coefficients, we obtain a zigzag persistence module and, based on standard results, we in turn obtain a persistence diagram or barcode from this zigzag persistence module. We prove that these barcodes are stable under perturbations in the input DMS. In order to formalize the notion of perturbation we introduce a suitable distance between DMSs and we then prove that the value of this distance between any two DMSs admits as a lower bound the bottleneck distance between the Rips barcodes associated to each of two input DMSs. This lower bound can be computed in polynomial time from the DMS inputs. Along the way, we propose a summarization of dynamic metric spaces that captures their time-dependent clustering features which we call formigrams. These set-valued functions generalize the notion of dendrogram, a prevalent tool for hierarchical clustering. In order to elucidate the relationship between our distance between two dynamic metric spaces and the bottleneck distance between their Rips zigzag barcodes, we exploit recent advances in the stability of zigzag persistence ( due to Botnan and Lesnick ). By providing explicit constructions, we prove that for each integer k\\geq 1 there exist pairs of DMSs at finite interleaving distance whose k-th persistent homology barcodes are at infinite barcode distance .", "after_revision": "When studying flocking/swarming behaviors in animals one is interested in quantifying and comparing the dynamics of the clustering induced by the coalescence and disbanding of animals in different groups. 
In a similar vein, studying the dynamics of social networks leads to the problem of characterizing groups/communities as they form and disperse throughout time. Motivated by this, we study the problem of obtaining persistent homology based summaries of time-dependent data. Given a finite dynamic graph (DG ), we first construct a zigzag persistence module arising from linearizing the dynamic transitive graph naturally induced from the input DG. Based on standard results, we then obtain a persistence diagram or barcode from this zigzag persistence module. We prove that these barcodes are stable under perturbations in the input DG under a suitable distance between DGs that we identify. More precisely, our stability theorem can be interpreted as providing a lower bound for the distance between DGs. Since it relies on barcodes, and their bottleneck distance, this lower bound can be computed in polynomial time from the DG inputs. Since DGs can be given rise by applying the Rips functor (with a fixed threshold) to dynamic metric spaces, we are also able to derive related stable invariants for these richer class of dynamic objects. Along the way, we propose a summarization of dynamic graphs that captures their time-dependent clustering features which we call formigrams. These set-valued functions generalize the notion of dendrogram, a prevalent tool for hierarchical clustering. 
In order to elucidate the relationship between our distance between two DGs and the bottleneck distance between their associated barcodes, we exploit recent advances in the stability of zigzag persistence due to Botnan and Lesnick , and to Bjerkevik .", "edit_actions": [{"type": "A", "before": null, "after": "In a similar vein, studying the dynamics of social networks leads to the problem of characterizing groups/communities as they form and disperse throughout time.", "start_char_pos": 205, "end_char_pos": 205}, {"type": "D", "before": "metric", "after": null, "start_char_pos": 313, "end_char_pos": 319}, {"type": "R", "before": "metric space (DMS", "after": "graph (DG", "start_char_pos": 349, "end_char_pos": 366}, {"type": "R", "before": "construct the zigzag simplicial filtration arising from applying the Rips simplicial complex construction (with a fixed scale parameter) to this finite DMS. Upon passing to 0-th homology with field coefficients, we obtain a zigzag persistence module and, based", "after": "first construct a zigzag persistence module arising from linearizing the dynamic transitive graph naturally induced from the input DG. Based", "start_char_pos": 373, "end_char_pos": 633}, {"type": "R", "before": "in turn", "after": "then", "start_char_pos": 658, "end_char_pos": 665}, {"type": "R", "before": "DMS. In order to formalize the notion of perturbation we introduce", "after": "DG under", "start_char_pos": 816, "end_char_pos": 882}, {"type": "R", "before": "DMSs and we then prove that the value of this distance between any two DMSs admits as", "after": "DGs that we identify. More precisely, our stability theorem can be interpreted as providing", "start_char_pos": 911, "end_char_pos": 996}, {"type": "R", "before": "the bottleneck distance between the Rips barcodes associated to each of two input DMSs. This", "after": "for the distance between DGs. 
Since it relies on barcodes, and their bottleneck distance, this", "start_char_pos": 1011, "end_char_pos": 1103}, {"type": "R", "before": "DMS inputs.", "after": "DG inputs. Since DGs can be given rise by applying the Rips functor (with a fixed threshold) to dynamic metric spaces, we are also able to derive related stable invariants for these richer class of dynamic objects.", "start_char_pos": 1160, "end_char_pos": 1171}, {"type": "R", "before": "metric spaces", "after": "graphs", "start_char_pos": 1225, "end_char_pos": 1238}, {"type": "R", "before": "dynamic metric spaces", "after": "DGs", "start_char_pos": 1502, "end_char_pos": 1523}, {"type": "R", "before": "Rips zigzag", "after": "associated", "start_char_pos": 1566, "end_char_pos": 1577}, {"type": "D", "before": "(", "after": null, "start_char_pos": 1654, "end_char_pos": 1655}, {"type": "R", "before": "). By providing explicit constructions, we prove that for each integer k\\geq 1 there exist pairs of DMSs at finite interleaving distance whose k-th persistent homology barcodes are at infinite barcode distance", "after": ", and to Bjerkevik", "start_char_pos": 1682, "end_char_pos": 1891}], "sents_char_pos": [0, 204, 325, 742, 1098, 1171, 1319, 1429, 1684]} {"doc_id": "1712.04912", "revision_depth": "1", "before_revision": "We develop a general class of two-step algorithms for heterogeneous treatment effect estimation in observational studies. We first estimate marginal effects and treatment propensities to form an objective function that isolates the heterogeneous treatment effects, and then optimize the learned objective . This approach has several advantages over existing methods. From a practical perspective, our method is very flexible and easy to use: In both steps, we can use any method of our choice , e.g., penalized regression, a deep net , or boosting; moreover, these methods can be fine-tuned by cross-validating on the learned objective . 
Meanwhile, in the case of penalized kernel regression, we show that our method has a quasi-oracle property , whereby even if our pilot estimates for marginal effects and treatment propensities are not particularly accurate, we achieve the same regret bounds as an oracle who has a-priori knowledge of these nuisance components. We implement variants of our method based on both penalized regression and convolutional neural networks , and find promising performance relative to existing baselines.", "after_revision": "Flexible estimation of heterogeneous treatment effects lies at the heart of many statistical challenges, such as personalized medicine and optimal resource allocation. In this paper, we develop a general class of two-step algorithms for heterogeneous treatment effect estimation in observational studies. We first estimate marginal effects and treatment propensities in order to form an objective function that isolates the causal component of the signal. Then, we optimize this data-adaptive objective function. Our approach has several advantages over existing methods. From a practical perspective, our method is flexible and easy to use: For both steps, we can use any loss-minimization method , e.g., penalized regression, deep neutral networks , or boosting; moreover, these methods can be fine-tuned by cross validation . Meanwhile, in the case of penalized kernel regression, we show that our method has a quasi-oracle property : Even if the pilot estimates for marginal effects and treatment propensities are not particularly accurate, we achieve the same regret bounds as an oracle who has a priori knowledge of these two nuisance components. 
We implement variants of our approach based on both penalized regression and boosting in a variety of simulation setups , and find promising performance relative to existing baselines.", "edit_actions": [{"type": "R", "before": "We", "after": "Flexible estimation of heterogeneous treatment effects lies at the heart of many statistical challenges, such as personalized medicine and optimal resource allocation. In this paper, we", "start_char_pos": 0, "end_char_pos": 2}, {"type": "A", "before": null, "after": "in order", "start_char_pos": 184, "end_char_pos": 184}, {"type": "R", "before": "heterogeneous treatment effects, and then optimize the learned objective . This", "after": "causal component of the signal. Then, we optimize this data-adaptive objective function. Our", "start_char_pos": 233, "end_char_pos": 312}, {"type": "D", "before": "very", "after": null, "start_char_pos": 412, "end_char_pos": 416}, {"type": "R", "before": "In", "after": "For", "start_char_pos": 443, "end_char_pos": 445}, {"type": "R", "before": "method of our choice", "after": "loss-minimization method", "start_char_pos": 473, "end_char_pos": 493}, {"type": "R", "before": "a deep net", "after": "deep neutral networks", "start_char_pos": 524, "end_char_pos": 534}, {"type": "R", "before": "cross-validating on the learned objective", "after": "cross validation", "start_char_pos": 595, "end_char_pos": 636}, {"type": "R", "before": ", whereby even if our", "after": ": Even if the", "start_char_pos": 746, "end_char_pos": 767}, {"type": "R", "before": "a-priori", "after": "a priori", "start_char_pos": 918, "end_char_pos": 926}, {"type": "A", "before": null, "after": "two", "start_char_pos": 946, "end_char_pos": 946}, {"type": "R", "before": "method", "after": "approach", "start_char_pos": 997, "end_char_pos": 1003}, {"type": "R", "before": "convolutional neural networks", "after": "boosting in a variety of simulation setups", "start_char_pos": 1043, "end_char_pos": 1072}], "sents_char_pos": [0, 121, 
307, 367, 549, 638, 967]} {"doc_id": "1712.08644", "revision_depth": "1", "before_revision": "We present DeepPicar, a low-cost deep neural network (DNN) based autonomous car platform. DeepPicar is a small scale replication of a real self-driving car called Dave-2 by NVIDIA, which drove on public roads using a deep convolutional neural network (CNN), that takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses almost the exact same network architecture---9 layers, 27 million connections and 250K parameters---and can be trained to drive itself , in real-time , using a web camera and a modest Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using the DeepPicar's CNN based real-time control software as a workload. We find all tested platforms, including the Pi 3, are capable of supporting deep-learning based real-time control, from 20 Hz up to 100 Hz depending on hardware platform. However, shared resource contention remains an important issue that must be considered in applying deep-learning models on shared memory based embedded computing platforms .", "after_revision": "We present DeepPicar, a low-cost deep neural network based autonomous car platform. DeepPicar is a small scale replication of a real self-driving car called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN), which takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture---9 layers, 27 million connections and 250K parameters---and can drive itself in real-time using a web camera and a Raspberry Pi 3 quad-core platform. 
Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using the DeepPicar's CNN-based real-time control workload. We find that all tested platforms, including the Pi 3, are capable of supporting the CNN-based real-time control, from 20 Hz up to 100 Hz , depending on hardware platform. However, we find that shared resource contention remains an important issue that must be considered in applying CNN models on shared memory based embedded computing platforms ; we observe up to 11.6X execution time increase in the CNN based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution .", "edit_actions": [{"type": "D", "before": "(DNN)", "after": null, "start_char_pos": 53, "end_char_pos": 58}, {"type": "R", "before": "Dave-2 by NVIDIA, which drove on public roads using", "after": "DAVE-2 by NVIDIA. 
DAVE-2 uses", "start_char_pos": 163, "end_char_pos": 214}, {"type": "R", "before": "that", "after": "which", "start_char_pos": 258, "end_char_pos": 262}, {"type": "R", "before": "almost the exact", "after": "the", "start_char_pos": 371, "end_char_pos": 387}, {"type": "R", "before": "be trained to drive itself ,", "after": "drive itself", "start_char_pos": 479, "end_char_pos": 507}, {"type": "D", "before": ",", "after": null, "start_char_pos": 521, "end_char_pos": 522}, {"type": "D", "before": "modest", "after": null, "start_char_pos": 548, "end_char_pos": 554}, {"type": "R", "before": "CNN based", "after": "CNN-based", "start_char_pos": 837, "end_char_pos": 846}, {"type": "D", "before": "software as a", "after": null, "start_char_pos": 865, "end_char_pos": 878}, {"type": "A", "before": null, "after": "that", "start_char_pos": 897, "end_char_pos": 897}, {"type": "R", "before": "deep-learning based", "after": "the CNN-based", "start_char_pos": 966, "end_char_pos": 985}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1029, "end_char_pos": 1029}, {"type": "A", "before": null, "after": "we find that", "start_char_pos": 1071, "end_char_pos": 1071}, {"type": "R", "before": "deep-learning", "after": "CNN", "start_char_pos": 1162, "end_char_pos": 1175}, {"type": "A", "before": null, "after": "; we observe up to 11.6X execution time increase in the CNN based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution", "start_char_pos": 1235, "end_char_pos": 1235}], "sents_char_pos": [0, 89, 355, 589, 735, 888, 1061]} {"doc_id": "1801.01533", "revision_depth": "1", "before_revision": "Alzheimer's Disease (AD) is a neurodegenerative disorder characterized by gradual memory and cognitive function loss. 
Currently, there are no effective treatments that can reverse or stabilize the symptoms of the disease . Anti-amyloid beta (ABeta) antibodies are the leading drug candidates to treat AD, but the results of clinical trials over the past two decades have been mostly disappointing. This was due to either low effectiveness or serious side effects that emerged during clinical trials. Introducing rational mutations into anti-ABeta antibodies to increase their effectiveness and remove harmful side reactions is a way forward, but the path to take is unclear since the key structural characteristics that determine the binding affinity and selectivity of anti-ABeta antibodies towards amyloid species are not well understood . In this study, we have taken a computational approach to examine how an anti-ABeta antibody binds to one or more ABeta epitopes in its antigen-combining site. Our unbiased fragment-based docking method successfully predicted the emergence of the key EFRH epitopecommonly observed when ABeta binds to anti-ABeta antibodies. MD simulations coupled with MMPBSA binding free energy calculations were used to analyze scenarios described in prior studies concerning the nature of anti-ABeta antibody binding to various ABeta species. Finally, based on observing our MD trajectories of the PFA1 antibody bound to ABeta_2-7 and pE3ABeta_3-8 , we introduced rational mutations into PFA1 in an effort to improve the calculated binding affinity of PFA1 towards the pE3-ABeta_3-8 form of ABeta. Two out of our four proposed mutations stabilized binding. Our study demonstrates that a computational approach can predict beneficial mutations which may lead to an improved drug candidate for AD in the future.", "after_revision": "Alzheimer's Disease (AD) is a neurodegenerative disorder that lacks effective treatment options . Anti-amyloid beta (ABeta) antibodies are the leading drug candidates to treat AD, but the results of clinical trials have been disappointing. 
Introducing rational mutations into anti-ABeta antibodies to increase their effectiveness is a way forward, but the path to take is unclear . In this study, we demonstrate the use of computational fragment-based docking and MMPBSA binding free energy calculations in the analysis of anti-ABeta antibodies for rational drug design efforts. Our fragment-based docking method successfully predicted the emergence of the common EFRH epitope, MD simulations coupled with MMPBSA binding free energy calculations were used to analyze scenarios described in prior studies , and we introduced rational mutations into PFA1 to improve its calculated binding affinity towards the pE3-ABeta3-8 form of ABeta. Two out of four proposed mutations stabilized binding. Our study demonstrates that a computational approach may lead to an improved drug candidate for AD in the future.", "edit_actions": [{"type": "R", "before": "characterized by gradual memory and cognitive function loss. Currently, there are no effective treatments that can reverse or stabilize the symptoms of the disease", "after": "that lacks effective treatment options", "start_char_pos": 57, "end_char_pos": 220}, {"type": "R", "before": "over the past two decades have been mostly disappointing. 
This was due to either low effectiveness or serious side effects that emerged during clinical trials.", "after": "have been disappointing.", "start_char_pos": 340, "end_char_pos": 499}, {"type": "D", "before": "and remove harmful side reactions", "after": null, "start_char_pos": 590, "end_char_pos": 623}, {"type": "D", "before": "since the key structural characteristics that determine the binding affinity and selectivity of anti-ABeta antibodies towards amyloid species are not well understood", "after": null, "start_char_pos": 674, "end_char_pos": 839}, {"type": "R", "before": "have taken a computational approach to examine how an", "after": "demonstrate the use of computational fragment-based docking and MMPBSA binding free energy calculations in the analysis of", "start_char_pos": 860, "end_char_pos": 913}, {"type": "R", "before": "antibody binds to one or more ABeta epitopes in its antigen-combining site. Our unbiased", "after": "antibodies for rational drug design efforts. Our", "start_char_pos": 925, "end_char_pos": 1013}, {"type": "R", "before": "key EFRH epitopecommonly observed when ABeta binds to anti-ABeta antibodies.", "after": "common EFRH epitope,", "start_char_pos": 1088, "end_char_pos": 1164}, {"type": "D", "before": "concerning the nature of anti-ABeta antibody binding to various ABeta species. 
Finally, based on observing our MD trajectories of the PFA1 antibody bound to ABeta_2-7 and pE3ABeta_3-8", "after": null, "start_char_pos": 1291, "end_char_pos": 1474}, {"type": "A", "before": null, "after": "and", "start_char_pos": 1477, "end_char_pos": 1477}, {"type": "R", "before": "in an effort to improve the", "after": "to improve its", "start_char_pos": 1521, "end_char_pos": 1548}, {"type": "R", "before": "of PFA1 towards the pE3-ABeta_3-8", "after": "towards the pE3-ABeta3-8", "start_char_pos": 1577, "end_char_pos": 1610}, {"type": "D", "before": "our", "after": null, "start_char_pos": 1637, "end_char_pos": 1640}, {"type": "D", "before": "can predict beneficial mutations which", "after": null, "start_char_pos": 1738, "end_char_pos": 1776}], "sents_char_pos": [0, 117, 222, 397, 499, 841, 1000, 1164, 1369, 1625, 1684]} {"doc_id": "1801.03681", "revision_depth": "1", "before_revision": "There has been growing evidence that cooperative interactions and configurational rearrangements underpin protein functions. But in spite of vast genetic and structural data, the information-dense, heterogeneous nature of protein has held back the progress in understanding the underlying principles. Here we outline a general theory of protein that quantitatively links sequence, dynamics and function: The protein is a strongly-coupled amino acid network whose interactions and large-scale motions are captured by the mechanical propagator, also known as the Green function. The propagator relates the gene to the connectivity of the amino acid network and the transmission of forces through the protein. How well the force pattern conforms to the collective modes of the functional protein is measured by the fitness. Mutations introduce localized perturbations to the propagator which scatter the force field. The emergence of function is manifested by a topological transition when a band of such perturbations divides the protein into subdomains. 
Epistasis quantifies how much the combined effect of multiple mutations departs from additivity. We find that epistasis is the nonlinearity of the Green function, which corresponds to a sum over multiple scattering paths passing through the localized perturbations . We apply this mechanical framework to the simulations of protein evolution, and observe long-range epistasis which facilitates collective functional modes . Our model lays the foundation for understanding the protein as an evolved state of matter and may be a prototype for other strongly-correlated living systems .", "after_revision": "The function of proteins arises from cooperative interactions and rearrangements of their amino acids, which exhibit large-scale dynamical modes. Long-range correlations have also been revealed in protein sequences, and this has motivated the search for physical links between the observed genetic and dynamic cooperativity. We outline here a simplified theory of protein , which relates sequence correlations to physical interactions and to the emergence of mechanical function. Our protein is modeled as a strongly-coupled amino acid network whose interactions and motions are captured by the mechanical propagator, the Green function. The propagator describes how the gene determines the connectivity of the amino acids, and thereby the transmission of forces . Mutations introduce localized perturbations to the propagator which scatter the force field. The emergence of function is manifested by a topological transition when a band of such perturbations divides the protein into subdomains. We find that epistasis -- the interaction among mutations in the gene -- is related to the nonlinearity of the Green function, which can be interpreted as a sum over multiple scattering paths . 
We apply this mechanical framework to simulations of protein evolution, and observe long-range epistasis which facilitates collective functional modes .", "edit_actions": [{"type": "R", "before": "There has been growing evidence that", "after": "The function of proteins arises from", "start_char_pos": 0, "end_char_pos": 36}, {"type": "R", "before": "configurational rearrangements underpin protein functions. But in spite of vast genetic and structural data, the information-dense, heterogeneous nature of protein has held back the progress in understanding the underlying principles. Here we outline a general", "after": "rearrangements of their amino acids, which exhibit large-scale dynamical modes. Long-range correlations have also been revealed in protein sequences, and this has motivated the search for physical links between the observed genetic and dynamic cooperativity. We outline here a simplified", "start_char_pos": 66, "end_char_pos": 326}, {"type": "R", "before": "that quantitatively links sequence, dynamics and function: The protein is", "after": ", which relates sequence correlations to physical interactions and to the emergence of mechanical function. Our protein is modeled as", "start_char_pos": 345, "end_char_pos": 418}, {"type": "D", "before": "large-scale", "after": null, "start_char_pos": 480, "end_char_pos": 491}, {"type": "D", "before": "also known as", "after": null, "start_char_pos": 543, "end_char_pos": 556}, {"type": "R", "before": "relates the gene to", "after": "describes how the gene determines", "start_char_pos": 592, "end_char_pos": 611}, {"type": "R", "before": "acid network and", "after": "acids, and thereby", "start_char_pos": 642, "end_char_pos": 658}, {"type": "R", "before": "through the protein. 
How well the force pattern conforms to the collective modes of the functional protein is measured by the fitness.", "after": ".", "start_char_pos": 686, "end_char_pos": 820}, {"type": "D", "before": "Epistasis quantifies how much the combined effect of multiple mutations departs from additivity.", "after": null, "start_char_pos": 1053, "end_char_pos": 1149}, {"type": "R", "before": "is", "after": "-- the interaction among mutations in the gene -- is related to", "start_char_pos": 1173, "end_char_pos": 1175}, {"type": "R", "before": "corresponds to", "after": "can be interpreted as", "start_char_pos": 1222, "end_char_pos": 1236}, {"type": "D", "before": "passing through the localized perturbations", "after": null, "start_char_pos": 1274, "end_char_pos": 1317}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1358, "end_char_pos": 1361}, {"type": "D", "before": ". Our model lays the foundation for understanding the protein as an evolved state of matter and may be a prototype for other strongly-correlated living systems", "after": null, "start_char_pos": 1475, "end_char_pos": 1634}], "sents_char_pos": [0, 124, 300, 576, 706, 820, 913, 1052, 1149, 1319, 1476]} {"doc_id": "1801.10088", "revision_depth": "1", "before_revision": "We propose a dynamic model for the stability of a large financial network , which we formulate as a system of interacting diffusions on the positive half-line with an absorbing boundary at zero. These diffusions represent the distances-to-default of the financial institutions in the network . As a way of modelling correlated exposures and herd behaviour, we consider a common source of noise and a form of mean-reversion in the drift. Moreover, we introduce an endogenous contagion mechanism whereby the default of one institution can lead to a drop in the distances-to-default of the other institutions. 
In order to have a general model for systemic (or macroscopic) events, we show that the above system converges to a unique mean field limit , which is characterized by a nonlinear SPDE on the half-line with a Dirichlet boundary condition . Depending on the realisations of the common noise and the strength of the mean reversion, this SPDE can exhibit rapid accelerations in the loss of mass at the boundary. In other words, there are events of small probability that can give rise to systemic default cascades sparked by a devaluation of the common exposures and amplified by herd behaviour .", "after_revision": "We propose a dynamic model for systemic risk in a large financial system , which we formulate as a system of interacting diffusions on the positive half-line with an absorbing boundary at zero. These diffusions represent the distances-to-default of the financial institutions . As a way of modelling correlated exposures and herd behaviour, we consider a common source of noise and a form of mean-reversion in the drift. Moreover, we introduce an endogenous contagion mechanism whereby the default of one institution can cause a drop in the distances-to-default of the other institutions. In order to have a general model for systemic (or macroscopic) events, we show that the system converges to a unique mean field limit characterized by a nonlinear SPDE on the half-line ( with a Dirichlet boundary condition ) which governs the conditional law of a 'conditional McKean-Vlasov' type diffusion . Depending on the realizations of the common noise and the strength of the mean reversion, the SPDE can exhibit rapid accelerations in the loss of mass at the boundary. 
In other words, sparked by a devaluation of the common exposures, there are events of small probability that , through amplification by herd behaviour, can give rise to systemic default cascades .", "edit_actions": [{"type": "R", "before": "the stability of", "after": "systemic risk in", "start_char_pos": 31, "end_char_pos": 47}, {"type": "R", "before": "network", "after": "system", "start_char_pos": 66, "end_char_pos": 73}, {"type": "D", "before": "in the network", "after": null, "start_char_pos": 277, "end_char_pos": 291}, {"type": "R", "before": "lead to", "after": "cause", "start_char_pos": 537, "end_char_pos": 544}, {"type": "D", "before": "above", "after": null, "start_char_pos": 695, "end_char_pos": 700}, {"type": "D", "before": ", which is", "after": null, "start_char_pos": 747, "end_char_pos": 757}, {"type": "A", "before": null, "after": "(", "start_char_pos": 809, "end_char_pos": 809}, {"type": "A", "before": null, "after": ") which governs the conditional law of a 'conditional McKean-Vlasov' type diffusion", "start_char_pos": 846, "end_char_pos": 846}, {"type": "R", "before": "realisations", "after": "realizations", "start_char_pos": 866, "end_char_pos": 878}, {"type": "R", "before": "this", "after": "the", "start_char_pos": 939, "end_char_pos": 943}, {"type": "A", "before": null, "after": "sparked by a devaluation of the common exposures,", "start_char_pos": 1034, "end_char_pos": 1034}, {"type": "A", "before": null, "after": ", through amplification by herd behaviour,", "start_char_pos": 1078, "end_char_pos": 1078}, {"type": "D", "before": "sparked by a devaluation of the common exposures and amplified by herd behaviour", "after": null, "start_char_pos": 1122, "end_char_pos": 1202}], "sents_char_pos": [0, 194, 293, 436, 606, 848, 1017]} {"doc_id": "1802.03405", "revision_depth": "1", "before_revision": "Differential equations with distributional sources---in particular, involving delta distributions and/or derivatives thereof---have become increasingly 
ubiquitous in numerous areas of physics and applied mathematics. It is often of considerable interest to obtain numerical solutions for such equations, but the singular (\" point-like \" ) modeling of the sources in these problems typically introduces nontrivial obstacles for devising a satisfactory numerical implementation . A common method to circumvent these is through some form of delta function approximation procedure on the computational grid , yet this strategy often carries significant limitations . In this paper, we present an alternative technique for tackling such equations : the \"Particle-without-Particle\" method. Previously introduced in the context of the self-force problem in gravitational physics, the idea is to discretize the computational domain into two (or more) disjoint pseudospectral (Chebyshev-Lobatto) grids in such a way that the \"particle\" (the singular source location) is always at the interface between them; in this way , one only needs to solve homogeneous equations in each domain, with the source effectively replaced by jump (boundary) conditions thereon. We prove here that this method is applicable to any linear PDE (of arbitrary order) the source of which is a linear combination of one-dimensional delta distributions and derivatives thereof supported at an arbitrary number of particles. 
We furthermore apply this method to obtain numerical solutions for various types of distributionally-sourced PDEs: we consider first-order hyperbolic equations with applications to neuroscience models (describing neural populations ), parabolic equations with applications to financial models (describing price formation), second-order hyperbolic equations with applications to wave acoustics, and finally elliptic(Poisson) equations .", "after_revision": "Partial differential equations with distributional sources---in particular, involving (derivatives of) delta distributions---have become increasingly ubiquitous in numerous areas of physics and applied mathematics. It is often of considerable interest to obtain numerical solutions for such equations, but any singular (\" particle \" -like) source modeling invariably introduces nontrivial computational obstacles . A common method to circumvent these is through some form of delta function approximation procedure on the computational grid ; however, this often carries significant limitations on the efficiency of the numerical convergence rates, or sometimes even the resolvability of the problem at all . In this paper, we present an alternative technique for tackling such equations which avoids the singular behavior entirely : the \"Particle-without-Particle\" method. Previously introduced in the context of the self-force problem in gravitational physics, the idea is to discretize the computational domain into two (or more) disjoint pseudospectral (Chebyshev-Lobatto) grids such that the \"particle\" is always at the interface between them; thus , one only needs to solve homogeneous equations in each domain, with the source effectively replaced by jump (boundary) conditions thereon. We prove here that this method yields solutions to any linear PDE the source of which is any linear combination of delta distributions and derivatives thereof supported on a one-dimensional subspace of the problem domain. 
We then implement it to numerically solve a variety of relevant PDEs: hyperbolic ( with applications to neuroscience and acoustics ), parabolic ( with applications to finance), and elliptic. We generically obtain improved convergence rates relative to typical past implementations relying on delta function approximations .", "edit_actions": [{"type": "R", "before": "Differential", "after": "Partial differential", "start_char_pos": 0, "end_char_pos": 12}, {"type": "R", "before": "delta distributions and/or derivatives thereof---have", "after": "(derivatives of) delta distributions---have", "start_char_pos": 78, "end_char_pos": 131}, {"type": "R", "before": "the", "after": "any", "start_char_pos": 308, "end_char_pos": 311}, {"type": "R", "before": "point-like", "after": "particle", "start_char_pos": 324, "end_char_pos": 334}, {"type": "R", "before": ") modeling of the sources in these problems typically introduces nontrivial obstacles for devising a satisfactory numerical implementation", "after": "-like) source modeling invariably introduces nontrivial computational obstacles", "start_char_pos": 337, "end_char_pos": 475}, {"type": "R", "before": ", yet this strategy", "after": "; however, this", "start_char_pos": 603, "end_char_pos": 622}, {"type": "A", "before": null, "after": "on the efficiency of the numerical convergence rates, or sometimes even the resolvability of the problem at all", "start_char_pos": 661, "end_char_pos": 661}, {"type": "A", "before": null, "after": "which avoids the singular behavior entirely", "start_char_pos": 743, "end_char_pos": 743}, {"type": "R", "before": "in such a way", "after": "such", "start_char_pos": 995, "end_char_pos": 1008}, {"type": "D", "before": "(the singular source location)", "after": null, "start_char_pos": 1029, "end_char_pos": 1059}, {"type": "R", "before": "in this way", "after": "thus", "start_char_pos": 1101, "end_char_pos": 1112}, {"type": "R", "before": "is applicable", "after": "yields solutions", 
"start_char_pos": 1284, "end_char_pos": 1297}, {"type": "D", "before": "(of arbitrary order)", "after": null, "start_char_pos": 1316, "end_char_pos": 1336}, {"type": "R", "before": "a", "after": "any", "start_char_pos": 1360, "end_char_pos": 1361}, {"type": "D", "before": "one-dimensional", "after": null, "start_char_pos": 1384, "end_char_pos": 1399}, {"type": "R", "before": "at an arbitrary number of particles. We furthermore apply this method to obtain numerical solutions for various types of distributionally-sourced PDEs: we consider first-order hyperbolic equations", "after": "on a one-dimensional subspace of the problem domain. We then implement it to numerically solve a variety of relevant PDEs: hyperbolic (", "start_char_pos": 1454, "end_char_pos": 1650}, {"type": "R", "before": "models (describing neural populations", "after": "and acoustics", "start_char_pos": 1685, "end_char_pos": 1722}, {"type": "R", "before": "equations", "after": "(", "start_char_pos": 1736, "end_char_pos": 1745}, {"type": "R", "before": "financial models (describing price formation), second-order hyperbolic equations with applications to wave acoustics, and finally elliptic(Poisson) equations", "after": "finance), and elliptic. We generically obtain improved convergence rates relative to typical past implementations relying on delta function approximations", "start_char_pos": 1767, "end_char_pos": 1924}], "sents_char_pos": [0, 216, 477, 663, 785, 1100, 1252, 1490]} {"doc_id": "1802.03627", "revision_depth": "1", "before_revision": "Time series produced by dynamical systems as frequently the case in neuroscience are rarely stationary but often exhibit quite abrupt changes due to bifurcations or other dynamical phenomena . A plethora of methods for detecting such changes in time series statistics , commonly called change point analysis, have been developed over the years, in addition to test criteria to evaluate change point significance. 
Issues to consider when developing such methods include computational demands, difficulties arising from either limited amount of data or a large number of covariates, and arriving at statistical tests with sufficient power to detect as many change points as contained in potentially high-dimensional data sets . Here, a general method called Paired Adaptive Regressors for Cumulative Sum (PARCS) is developed for detecting multiple change points in multivariate time series. The method's flexibility to incorporate useful features from other change point detection techniques is highlighted . The advantages of PARCS over existing approaches are demonstrated through a series of simulation experiments, followed by a real data application to neural recordings from rat medial prefrontal cortex during learning.", "after_revision": "Time series , as frequently the case in neuroscience , are rarely stationary , but often exhibit abrupt changes due to attractor transitions or bifurcations in the dynamical systems producing them . A plethora of methods for detecting such change points in time series statistics have been developed over the years, in addition to test criteria to evaluate their significance. Issues to consider when developing change point analysis methods include computational demands, difficulties arising from either limited amount of data or a large number of covariates, and arriving at statistical tests with sufficient power to detect as many changes as contained in potentially high-dimensional time series . Here, a general method called Paired Adaptive Regressors for Cumulative Sum is developed for detecting multiple change points in the mean of multivariate time series. 
The method's flexibility to incorporate useful features from state-of-the-art change point detection techniques is highlighted , and its advantages over alternative approaches are demonstrated through a series of simulation experiments, along with potential drawbacks and suggestions to remedy them. This is followed by a real data application to neural recordings from rat medial prefrontal cortex during learning.", "edit_actions": [{"type": "R", "before": "produced by dynamical systems", "after": ",", "start_char_pos": 12, "end_char_pos": 41}, {"type": "A", "before": null, "after": ",", "start_char_pos": 81, "end_char_pos": 81}, {"type": "A", "before": null, "after": ",", "start_char_pos": 104, "end_char_pos": 104}, {"type": "D", "before": "quite", "after": null, "start_char_pos": 123, "end_char_pos": 128}, {"type": "R", "before": "bifurcations or other dynamical phenomena", "after": "attractor transitions or bifurcations in the dynamical systems producing them", "start_char_pos": 151, "end_char_pos": 192}, {"type": "R", "before": "changes", "after": "change points", "start_char_pos": 236, "end_char_pos": 243}, {"type": "D", "before": ", commonly called change point analysis,", "after": null, "start_char_pos": 270, "end_char_pos": 310}, {"type": "R", "before": "change point", "after": "their", "start_char_pos": 388, "end_char_pos": 400}, {"type": "R", "before": "such", "after": "change point analysis", "start_char_pos": 450, "end_char_pos": 454}, {"type": "R", "before": "change points", "after": "changes", "start_char_pos": 657, "end_char_pos": 670}, {"type": "R", "before": "data sets", "after": "time series", "start_char_pos": 716, "end_char_pos": 725}, {"type": "D", "before": "(PARCS)", "after": null, "start_char_pos": 804, "end_char_pos": 811}, {"type": "A", "before": null, "after": "the mean of", "start_char_pos": 865, "end_char_pos": 865}, {"type": "R", "before": "other", "after": "state-of-the-art", "start_char_pos": 953, "end_char_pos": 958}, {"type": "R", 
"before": ". The advantages of PARCS over existing", "after": ", and its advantages over alternative", "start_char_pos": 1008, "end_char_pos": 1047}, {"type": "A", "before": null, "after": "along with potential drawbacks and suggestions to remedy them. This is", "start_char_pos": 1120, "end_char_pos": 1120}], "sents_char_pos": [0, 194, 414, 727, 891, 1009]} {"doc_id": "1802.05405", "revision_depth": "1", "before_revision": "We seek to (i) characterize the learning architectures exploited in biological neural networks for training on very few samples, and (ii) port these algorithmic structures to a machine learning context. The Moth Olfactory Network is among the simplest biological neural systems that can learn, and its architecture includes key structural elements widespread in biological neural nets, such as cascaded networks, competitive inhibition, high intrinsic noise, sparsity, reward mechanisms, and Hebbian plasticity. The interactions of these structural elements play a critical enabling role in rapid learning. We assign a computational model of the Moth Olfactory Network the task of learning to read the MNIST digits. This model, MothNet, is closely aligned with the moth's known biophysics and with in vivo electrode data , including data collected from moths learning new odors. We show that MothNet successfully learns to read given very few training samples (1 to 20 samples per class). In this few-samples regime, it substantially outperforms standard machine learning methods such as nearest-neighbors, support-vector machines, and convolutional neural networks ( CNNs) . The MothNet architecture illustrates how our proposed algorithmic structures , derived from biological brains , can be used to build alternative deep neural nets (DNNs) that may potentially avoid some of DNNs current learning rate limitations . 
This novel, bio-inspired neural network architecture offers a valuable complementary approach to DNN design .", "after_revision": "We seek to (i) characterize the learning architectures exploited in biological neural networks for training on very few samples, and (ii) port these algorithmic structures to a machine learning context. The Moth Olfactory Network is among the simplest biological neural systems that can learn, and its architecture includes key structural elements and mechanisms widespread in biological neural nets, such as cascaded networks, competitive inhibition, high intrinsic noise, sparsity, reward mechanisms, and Hebbian plasticity. These structural biological elements, in combination, enable rapid learning. MothNet is a computational model of the Moth Olfactory Network , closely aligned with the moth's known biophysics and with in vivo electrode data collected from moths learning new odors. We assign this model the task of learning to read the MNIST digits. We show that MothNet successfully learns to read given very few training samples (1 to 10 samples per class). In this few-samples regime, it outperforms standard machine learning methods such as nearest-neighbors, support-vector machines, and neural networks ( NNs), and matches specialized one-shot transfer-learning methods but without the need for pre-training . 
The MothNet architecture illustrates how algorithmic structures derived from biological brains can be used to build alternative NNs that may avoid some of the learning rate limitations of current engineered NNs .", "edit_actions": [{"type": "A", "before": null, "after": "and mechanisms", "start_char_pos": 348, "end_char_pos": 348}, {"type": "R", "before": "The interactions of these structural elements play a critical enabling role in", "after": "These structural biological elements, in combination, enable", "start_char_pos": 513, "end_char_pos": 591}, {"type": "R", "before": "We assign", "after": "MothNet is", "start_char_pos": 608, "end_char_pos": 617}, {"type": "R", "before": "the task of learning to read the MNIST digits. This model, MothNet, is", "after": ",", "start_char_pos": 670, "end_char_pos": 740}, {"type": "D", "before": ", including data", "after": null, "start_char_pos": 822, "end_char_pos": 838}, {"type": "A", "before": null, "after": "assign this model the task of learning to read the MNIST digits. 
We", "start_char_pos": 883, "end_char_pos": 883}, {"type": "R", "before": "20", "after": "10", "start_char_pos": 968, "end_char_pos": 970}, {"type": "D", "before": "substantially", "after": null, "start_char_pos": 1022, "end_char_pos": 1035}, {"type": "D", "before": "convolutional", "after": null, "start_char_pos": 1138, "end_char_pos": 1151}, {"type": "R", "before": "CNNs)", "after": "NNs), and matches specialized one-shot transfer-learning methods but without the need for pre-training", "start_char_pos": 1170, "end_char_pos": 1175}, {"type": "R", "before": "our proposed algorithmic structures ,", "after": "algorithmic structures", "start_char_pos": 1219, "end_char_pos": 1256}, {"type": "D", "before": ",", "after": null, "start_char_pos": 1288, "end_char_pos": 1289}, {"type": "R", "before": "deep neural nets (DNNs) that may potentially", "after": "NNs that may", "start_char_pos": 1323, "end_char_pos": 1367}, {"type": "R", "before": "DNNs current", "after": "the", "start_char_pos": 1382, "end_char_pos": 1394}, {"type": "R", "before": ". This novel, bio-inspired neural network architecture offers a valuable complementary approach to DNN design", "after": "of current engineered NNs", "start_char_pos": 1421, "end_char_pos": 1530}], "sents_char_pos": [0, 202, 512, 607, 716, 879, 990, 1177, 1422]} {"doc_id": "1802.05995", "revision_depth": "1", "before_revision": "Given a set \\mathcal{O of k orientations in the plane, two points inside a simple polygon P \\mathcal{O if there is an O -staircase contained in P that connects them . The O -kernel of P{\\rm -}{\\rm is the subset of points which \\mathcal{O}-see all the other points in P. This work initiates the study of the computation and maintenance of the \\mathcal{O}-{\\rm Kernel } of a polygon P as we rotate the set \\mathcal{O} by an angle \\theta, denoted \\mathcal{O}-{\\rm Kernel }_{\\theta}(P). 
In particular, we is formed by either one or two orthogonal orientations, \\mathcal{O}=\\{0^\\circ\\} or \\mathcal{O}=\\{0^\\circ,90^\\circ\\}. For these cases and P being a simple polygon, we } design efficient algorithms for (i) computing and maintaining \\{0^{o\\}} -{\\rm Kernel }_{\\theta}(P) while \\theta varies in [-\\frac{\\pi}{2},\\frac{\\pi}{2}), obtaining the angular intervals where the \\{0^{o\\}} -{\\rm Kernel }_{\\theta}(P) is not empty and (ii) for orthogonal polygons P, computing the orientation \\theta\\in 0, \\frac{\\pi{2}) such that the area and/or the perimeter of the \\{0^{o},90^{o}\\}} -{\\rm Kernel }_{\\theta}(P) are maximum or minimum. These results extend previous works by Gewali, Palios, Rawlins, Schuierer, and Wood .", "after_revision": "Let \\mathcal{O of k orientations in the plane, and let P be a simple polygon in the plane. Given two points p,q inside P, we say that p \\mathcal{O sees q if there is an O -staircase contained in P that connects p and q . The O -{\\rm Kernel of the polygon P, denoted by \\mathcal{O-}{\\rm Kernel (P), is the subset of points which \\mathcal{O}-see all the other points in P. This work initiates the study of the computation and maintenance of \\mathcal{O}-{\\rm Kernel } (P) as we rotate the set \\mathcal{O} by an angle \\theta, denoted \\mathcal{O}-{\\rm Kernel }_{\\theta}(P). In particular, we consider the case when the set \\mathcal{O is formed by either one or two orthogonal orientations, \\mathcal{O}=\\{0^\\circ\\} or \\mathcal{O}=\\{0^\\circ,90^\\circ\\}. 
For these cases and P being a simple polygon, we } design efficient algorithms for computing and maintaining \\}} the \\mathcal{O -{\\rm Kernel }_{\\theta}(P) while \\theta varies in [-\\frac{\\pi}{2},\\frac{\\pi}{2}), obtaining the angular intervals where \\}} : (i) \\mathcal{O -{\\rm Kernel }_{\\theta}(P) is not empty , (ii) {2}) such that the area and/or the perimeter of the \\{0^{o},90^{o}\\}} \\mathcal{O -{\\rm Kernel }_{\\theta}(P) optimizes area or perimeter. Further, we show how the algorithms can be improved when P is a simple orthogonal polygon .", "edit_actions": [{"type": "R", "before": "Given a set \\mathcal{O", "after": "Let \\mathcal{O", "start_char_pos": 0, "end_char_pos": 22}, {"type": "R", "before": "two points inside", "after": "and let P be", "start_char_pos": 55, "end_char_pos": 72}, {"type": "R", "before": "P \\mathcal{O", "after": "in the plane. Given two points p,q inside P, we say that p \\mathcal{O", "start_char_pos": 90, "end_char_pos": 102}, {"type": "A", "before": null, "after": "sees", "start_char_pos": 103, "end_char_pos": 103}, {"type": "A", "before": null, "after": "q", "start_char_pos": 104, "end_char_pos": 104}, {"type": "R", "before": "-staircase", "after": "-", "start_char_pos": 122, "end_char_pos": 132}, {"type": "A", "before": null, "after": "staircase", "start_char_pos": 132, "end_char_pos": 132}, {"type": "R", "before": "them", "after": "p and q", "start_char_pos": 162, "end_char_pos": 166}, {"type": "R", "before": "-kernel of P", "after": "-", "start_char_pos": 175, "end_char_pos": 187}, {"type": "A", "before": null, "after": "Kernel", "start_char_pos": 192, "end_char_pos": 192}, {"type": "A", "before": null, "after": "of the polygon P, denoted by \\mathcal{O", "start_char_pos": 193, "end_char_pos": 193}, {"type": "A", "before": null, "after": "Kernel", "start_char_pos": 200, "end_char_pos": 200}, {"type": "A", "before": null, "after": "(P),", "start_char_pos": 201, "end_char_pos": 201}, {"type": "D", "before": "the", 
"after": null, "start_char_pos": 343, "end_char_pos": 346}, {"type": "R", "before": "of a polygon P", "after": "(P)", "start_char_pos": 373, "end_char_pos": 387}, {"type": "A", "before": null, "after": "consider the case when the set \\mathcal{O", "start_char_pos": 506, "end_char_pos": 506}, {"type": "D", "before": "(i)", "after": null, "start_char_pos": 707, "end_char_pos": 710}, {"type": "D", "before": "\\{0^{o", "after": null, "start_char_pos": 737, "end_char_pos": 743}, {"type": "A", "before": null, "after": "the \\mathcal{O", "start_char_pos": 747, "end_char_pos": 747}, {"type": "D", "before": "the \\{0^{o", "after": null, "start_char_pos": 868, "end_char_pos": 878}, {"type": "A", "before": null, "after": ": (i) \\mathcal{O", "start_char_pos": 882, "end_char_pos": 882}, {"type": "R", "before": "and", "after": ",", "start_char_pos": 923, "end_char_pos": 926}, {"type": "D", "before": "for orthogonal polygons P, computing the orientation \\theta\\in", "after": null, "start_char_pos": 932, "end_char_pos": 994}, {"type": "D", "before": "0, \\frac{\\pi", "after": null, "start_char_pos": 995, "end_char_pos": 1007}, {"type": "A", "before": null, "after": "\\mathcal{O", "start_char_pos": 1077, "end_char_pos": 1077}, {"type": "R", "before": "are maximum or minimum. These results extend previous works by Gewali, Palios, Rawlins, Schuierer, and Wood", "after": "optimizes area or perimeter. Further, we show how the algorithms can be improved when P is a simple orthogonal polygon", "start_char_pos": 1105, "end_char_pos": 1212}], "sents_char_pos": [0, 168, 274, 487, 623, 749, 1128]} {"doc_id": "1802.06665", "revision_depth": "1", "before_revision": "We study the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of policy iterations employed in the estimation. This class nests several estimators proposed in the literature . 
By considering a \"maximum likelihood\" criterion function, our estimator becomes the K-ML estimator in Aguirregabiria and Mira ( 2007) . By considering a \"minimum distance\" criterion function, it defines a new K-MD estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008) . First, we establish that the K-ML estimator is consistent and asymptotically normal for any K. This complements findings in Aguirregabiria and Mira (2007) . Furthermore, we show that the asymptotic variance of the K-ML estimator can exhibit arbitrary patterns as a function K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for any K. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-ML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. This new result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-ML estimators. Our main result implies two new and important corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators for all K . In other words, additional policy iterations do not provide asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is more or equally asymptotically efficient than any K-ML estimator for all K .", "after_revision": "We study the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of policy iterations employed in the estimation. 
This class nests several estimators proposed in the literature such as those in Aguirregabiria and Mira ( 2002, 2007) , Pesendorfer and Schmidt-Dengler (2008) , and Pakes et al. (2007). First, we establish that the K-PML estimator is consistent and asymptotically normal for all K. This complements findings in Aguirregabiria and Mira (2007) , who focus on K=1 and K large enough to induce convergence of the estimator . Furthermore, we show under certain conditions that the asymptotic variance of the K-PML estimator can exhibit arbitrary patterns as a function of K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for all K. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-PML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. The invariance result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-PML estimators. Our main result implies two new corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators . In other words, additional policy iterations do not provide asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is more or equally asymptotically efficient than any K-PML estimator for all K . Finally, the appendix provides appropriate conditions under which the optimal 1-MD estimator is asymptotically efficient .", "edit_actions": [{"type": "R", "before": ". 
By considering a \"maximum likelihood\" criterion function, our estimator becomes the K-ML estimator", "after": "such as those", "start_char_pos": 316, "end_char_pos": 416}, {"type": "A", "before": null, "after": "2002,", "start_char_pos": 446, "end_char_pos": 446}, {"type": "R", "before": ". By considering a \"minimum distance\" criterion function, it defines a new K-MD estimator, which is an iterative version of the estimators in", "after": ",", "start_char_pos": 453, "end_char_pos": 594}, {"type": "R", "before": ".", "after": ", and Pakes et al. (2007).", "start_char_pos": 634, "end_char_pos": 635}, {"type": "R", "before": "K-ML", "after": "K-PML", "start_char_pos": 665, "end_char_pos": 669}, {"type": "R", "before": "any", "after": "all", "start_char_pos": 724, "end_char_pos": 727}, {"type": "A", "before": null, "after": ", who focus on K=1 and K large enough to induce convergence of the estimator", "start_char_pos": 791, "end_char_pos": 791}, {"type": "A", "before": null, "after": "under certain conditions", "start_char_pos": 815, "end_char_pos": 815}, {"type": "R", "before": "K-ML", "after": "K-PML", "start_char_pos": 852, "end_char_pos": 856}, {"type": "A", "before": null, "after": "of", "start_char_pos": 912, "end_char_pos": 912}, {"type": "R", "before": "any", "after": "all", "start_char_pos": 1005, "end_char_pos": 1008}, {"type": "R", "before": "K-ML", "after": "K-PML", "start_char_pos": 1105, "end_char_pos": 1109}, {"type": "R", "before": "This new", "after": "The invariance", "start_char_pos": 1316, "end_char_pos": 1324}, {"type": "R", "before": "K-ML", "after": "K-PML", "start_char_pos": 1414, "end_char_pos": 1418}, {"type": "D", "before": "and important", "after": null, "start_char_pos": 1463, "end_char_pos": 1476}, {"type": "D", "before": "for all K", "after": null, "start_char_pos": 1652, "end_char_pos": 1661}, {"type": "R", "before": "K-ML", "after": "K-PML", "start_char_pos": 1880, "end_char_pos": 1884}, {"type": "A", "before": null, "after": ". 
Finally, the appendix provides appropriate conditions under which the optimal 1-MD estimator is asymptotically efficient", "start_char_pos": 1905, "end_char_pos": 1905}], "sents_char_pos": [0, 122, 252, 317, 454, 635, 730, 793, 915, 1011, 1120, 1315, 1430, 1574, 1791]} {"doc_id": "1802.07569", "revision_depth": "1", "before_revision": "Humans and animals have the ability to continually acquire and fine-tune knowledge throughout their lifespan. This ability is mediated by a rich set of neurocognitive functions that together contribute to the early development and experience-driven specialization of our sensorimotor skills . Consequently, the ability to learn from continuous streams of information is crucial for computational learning systems and autonomous agents (inter)acting in the real world . However, continual lifelong learning remains a long-standing challenge for machine learning and neural network models since the incremental acquisition of new skills from non-stationary data distributions generally leads to catastrophic URLetting or interference. This limitation represents a major drawback also for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which the number of tasks is not known a priori and the information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to continual lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic interference . Although significant advances have been made in domain-specific continual lifelong learning with neural networks, extensive research efforts are required for the development of general-purpose artificial intelligence and autonomous agents . 
We discuss well-established research and recent methodological trends motivated by experimentally observed lifelong learning factors in biological systems . Such factors include principles of neurosynaptic stability-plasticity, critical developmental stages , intrinsically motivated exploration, transfer learning, and crossmodal integration .", "after_revision": "Humans and animals have the ability to continually acquire and fine-tune knowledge throughout their lifespan. This ability , referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to the long-term memory consolidation and retrieval without catastrophic URLetting . Consequently, lifelong learning capabilities are crucial for computational learning systems and autonomous agents interacting in the real world and processing continuous streams of information . However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic URLetting or interference. This limitation represents a major drawback also for state-of-the-art deep and shallow neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which the number of tasks is not known a priori and the information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic URLetting . 
Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots . We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as neurosynaptic plasticity, multi-task transfer learning , intrinsically motivated exploration, and crossmodal learning .", "edit_actions": [{"type": "A", "before": null, "after": ", referred to as lifelong learning,", "start_char_pos": 123, "end_char_pos": 123}, {"type": "R", "before": "functions", "after": "mechanisms", "start_char_pos": 168, "end_char_pos": 177}, {"type": "R", "before": "early development and experience-driven", "after": "development and", "start_char_pos": 210, "end_char_pos": 249}, {"type": "A", "before": null, "after": "as well as to the long-term memory consolidation and retrieval without catastrophic URLetting", "start_char_pos": 292, "end_char_pos": 292}, {"type": "R", "before": "the ability to learn from continuous streams of information is", "after": "lifelong learning capabilities are", "start_char_pos": 309, "end_char_pos": 371}, {"type": "R", "before": "(inter)acting", "after": "interacting", "start_char_pos": 437, "end_char_pos": 450}, {"type": "A", "before": null, "after": "and processing continuous streams of information", "start_char_pos": 469, "end_char_pos": 469}, {"type": "D", "before": "continual", "after": null, "start_char_pos": 481, "end_char_pos": 490}, {"type": "R", "before": "incremental acquisition of new skills", "after": "continual acquisition of incrementally available information", "start_char_pos": 600, "end_char_pos": 637}, {"type": "A", "before": null, "after": "and shallow", "start_char_pos": 811, "end_char_pos": 811}, {"type": "D", "before": "continual", "after": null, "start_char_pos": 1136, "end_char_pos": 1145}, {"type": "R", "before": "interference", "after": "URLetting", 
"start_char_pos": 1294, "end_char_pos": 1306}, {"type": "D", "before": "continual lifelong", "after": null, "start_char_pos": 1373, "end_char_pos": 1391}, {"type": "R", "before": "general-purpose artificial intelligence and autonomous agents", "after": "robust lifelong learning on autonomous agents and robots", "start_char_pos": 1486, "end_char_pos": 1547}, {"type": "R", "before": "research and recent methodological trends motivated by experimentally observed", "after": "and emerging research motivated by", "start_char_pos": 1578, "end_char_pos": 1656}, {"type": "R", "before": ". Such factors include principles of neurosynaptic stability-plasticity, critical developmental stages", "after": "such as neurosynaptic plasticity, multi-task transfer learning", "start_char_pos": 1705, "end_char_pos": 1807}, {"type": "R", "before": "transfer learning, and crossmodal integration", "after": "and crossmodal learning", "start_char_pos": 1847, "end_char_pos": 1892}], "sents_char_pos": [0, 109, 471, 735, 1065, 1308, 1549, 1706]} {"doc_id": "1802.08254", "revision_depth": "1", "before_revision": "As architecture, system, data management, and machine learning communities pay greater attention to innovative big data and data-driven artificial intelligence (in short, AI ) algorithms, architecture, and systems , the pressure of benchmarking rises. However , complexity, diversity, frequently changed workloads, and rapid evolution of big data , especially AI systems raise great challenges in benchmarking . First, for the sake of conciseness, benchmarking scalability, portability cost, reproducibility, and better interpretation of performance data, we need understand what are the abstractions of frequently-appearing units of computation , which we call dwarfs, among big data and AI workloads. Second, for the sake of fairness, the benchmarks must include diversity of data and workloads. 
Third, for co-design of software and hardware, the benchmarks should be consistent across different communities. Other than creating a new benchmark or proxy for every possible workload, we propose using dwarf-based benchmarks--the combination of eight dwarfs--to represent diversity of big data and AI workloads. The current version--BigDataBench 4.0 provides 13 representative real-world data sets and 47 big data and AI benchmarks, including seven workload types: online service, offline analytics, graph analytics, AI, data warehouse, NoSQL, and streaming. BigDataBench 4.0 is publicly available from URL Also, for the first time, we comprehensively characterize the benchmarks of seven workload types in BigDataBench 4.0 in addition to traditional benchmarks like SPECCPU, PARSEC and HPCC in a hierarchical manner and drill down on five levels, using the Top-Down analysis from an architecture perspective .", "after_revision": "Several fundamental changes in technology indicate domain-specific hardware and software co-design is the only path left. In this context, architecture, system, data management, and machine learning communities pay greater attention to innovative big data and AI algorithms, architecture, and systems . Unfortunately , complexity, diversity, frequently-changed workloads, and rapid evolution of big data and AI systems raise great challenges . First, the traditional benchmarking methodology that creates a new benchmark or proxy for every possible workload is not scalable, or even impossible for Big Data and AI benchmarking. Second, it is prohibitively expensive to tailor the architecture to characteristics of one or more application or even a domain of applications. We consider each big data and AI workload as a pipeline of one or more classes of units of computation performed on different initial or intermediate data inputs, each class of which we call a data motif. 
On the basis of our previous work that identifies eight data motifs taking up most of the run time of a wide variety of big data and AI workloads, we propose a scalable benchmarking methodology that uses the combination of one or more data motifs---to represent diversity of big data and AI workloads. Following this methodology, we present a unified big data and AI benchmark suite---BigDataBench 4.0 , publicly available URL This unified benchmark suite sheds new light on domain-specific hardware and software co-design: tailoring the system and architecture to characteristics of the unified eight data motifs other than one or more application case by case. Also, for the first time, we comprehensively characterize the CPU pipeline efficiency using the benchmarks of seven workload types in BigDataBench 4.0 .", "edit_actions": [{"type": "R", "before": "As", "after": "Several fundamental changes in technology indicate domain-specific hardware and software co-design is the only path left. In this context,", "start_char_pos": 0, "end_char_pos": 2}, {"type": "R", "before": "data-driven artificial intelligence (in short, AI )", "after": "AI", "start_char_pos": 124, "end_char_pos": 175}, {"type": "R", "before": ", the pressure of benchmarking rises. However", "after": ". 
Unfortunately", "start_char_pos": 214, "end_char_pos": 259}, {"type": "R", "before": "frequently changed", "after": "frequently-changed", "start_char_pos": 285, "end_char_pos": 303}, {"type": "R", "before": ", especially", "after": "and", "start_char_pos": 347, "end_char_pos": 359}, {"type": "D", "before": "in benchmarking", "after": null, "start_char_pos": 394, "end_char_pos": 409}, {"type": "R", "before": "for the sake of conciseness, benchmarking scalability, portability cost, reproducibility, and better interpretation of performance data, we need understand what are the abstractions of frequently-appearing", "after": "the traditional benchmarking methodology that creates a new benchmark or proxy for every possible workload is not scalable, or even impossible for Big Data and AI benchmarking. Second, it is prohibitively expensive to tailor the architecture to characteristics of one or more application or even a domain of applications. We consider each big data and AI workload as a pipeline of one or more classes of", "start_char_pos": 419, "end_char_pos": 624}, {"type": "R", "before": ",", "after": "performed on different initial or intermediate data inputs, each class of", "start_char_pos": 646, "end_char_pos": 647}, {"type": "R", "before": "dwarfs, among big data and AI workloads. Second, for the sake of fairness, the benchmarks must include diversity of data and workloads. Third, for co-design of software and hardware, the benchmarks should be consistent across different communities. Other than creating a new benchmark or proxy for every possible workload, we propose using dwarf-based benchmarks--the combination of eight dwarfs--to", "after": "a data motif. 
On the basis of our previous work that identifies eight data motifs taking up most of the run time of a wide variety of big data and AI workloads, we propose a scalable benchmarking methodology that uses the combination of one or more data motifs---to", "start_char_pos": 662, "end_char_pos": 1061}, {"type": "R", "before": "The current version--BigDataBench 4.0 provides 13 representative real-world data sets and 47", "after": "Following this methodology, we present a unified", "start_char_pos": 1112, "end_char_pos": 1204}, {"type": "R", "before": "benchmarks, including seven workload types: online service, offline analytics, graph analytics, AI, data warehouse, NoSQL, and streaming. BigDataBench", "after": "benchmark suite---BigDataBench", "start_char_pos": 1221, "end_char_pos": 1371}, {"type": "R", "before": "is publicly available from URL", "after": ", publicly available URL This unified benchmark suite sheds new light on domain-specific hardware and software co-design: tailoring the system and architecture to characteristics of the unified eight data motifs other than one or more application case by case.", "start_char_pos": 1376, "end_char_pos": 1406}, {"type": "A", "before": null, "after": "CPU pipeline efficiency using the", "start_char_pos": 1469, "end_char_pos": 1469}, {"type": "D", "before": "in addition to traditional benchmarks like SPECCPU, PARSEC and HPCC in a hierarchical manner and drill down on five levels, using the Top-Down analysis from an architecture perspective", "after": null, "start_char_pos": 1525, "end_char_pos": 1709}], "sents_char_pos": [0, 251, 411, 702, 797, 910, 1111, 1358]} {"doc_id": "1803.03571", "revision_depth": "1", "before_revision": "From a public-health perspective, the occurrence of drug-drug-interactions (DDI) from multiple drug prescriptions is a serious problem, especially in the elderly population. 
This is true both for individuals and the system itself since patients with complications due to DDI will likely re-enter the system at a costlier level. We conducted an 18-month study of DDI occurrence in Blumenau (Brazil ; pop. 340,000) using city-wide drug dispensing data from both primary and secondary-care level. Our goal is also to identify possible risk factors in a large population, ultimately characterizing the burden of DDI for patients, doctors and the public system itself . We found 181 distinct DDI being prescribed concomitantly to almost 5 \\% of the city population. We also discovered that women are at a 60\\% risk increase of DDI when compared to men , while only having a 6\\% co-administration risk increase. Analysis of the DDI co-occurrence network reveals which DDI pairs are most associated with the observed greater DDI risk for females, demonstrating that contraception and hormone therapy are not the main culprits of the gender disparity, which is maximized after the reproductive years . Furthermore, DDI risk increases dramatically with age, with patients age 70-79 having a 50-fold risk increase in comparison to patients aged 0-19 . Interestingly, several null models demonstrate that this risk increase is not due to increased polypharmacy with age. Finally, we demonstrate that while the number of drugs and co-administrations help predict a patient's number of DDI (R^2=.413), they are not sufficient to flag these patients accurately, which we achieve by training classifiers with additional data (MCC= .83,F1 = .72 ). 
These results demonstrate that accurate warning systems for known DDI can be devised for public and private systems alike, resulting in substantial prevention of DDI-related ADR and savings.", "after_revision": "The occurrence of drug-drug-interactions (DDI) from multiple drug prescriptions is a serious problem, both for individuals and health-care systems, since patients with complications due to DDI are likely to re-enter the system at a costlier level. We present a large-scale longitudinal study of the DDI phenomenon at the primary- and secondary-care level using electronic health records from the city of Blumenau in Southern Brazil (pop. ~ 340,000) . This is the first study of DDI we are aware of that follows an entire city longitudinally for 18 months. We found that 181 distinct drug pairs known to interact were dispensed concomitantly to 12 \\% of the patients in the city's public health-care system. Further, 4\\% of the patients were dispensed major DDI combinations, likely to result in very serious adverse reactions and costs we estimate to be larger than previously reported. DDI results are integrated into associative networks for inference and visualization, revealing key medications and interactions. Analysis reveals that women have a 60\\% increased risk of DDI as compared to men ; the increase becomes 90\\% when only major DDI are considered . Furthermore, DDI risk increases substantially with age. Patients aged 70-79 years have a 34\\% risk of DDI when they are prescribed two or more drugs concomitantly . Interestingly, a null model demonstrates that age and women-specific risks from increased polypharmacy far exceed expectations in those populations. This suggests that social and biological factors are at play. Finally, we demonstrate that machine learning classifiers accurately predict patients likely to be administered DDI given their history of prescribed drugs, gender, and age (MCC= .7,AUC = .97 ). 
These results demonstrate that accurate warning systems for known DDI can be devised for health-care systems leading to substantial reduction of DDI-related adverse reactions and health-care savings.", "edit_actions": [{"type": "R", "before": "From a public-health perspective, the", "after": "The", "start_char_pos": 0, "end_char_pos": 37}, {"type": "D", "before": "especially in the elderly population. This is true", "after": null, "start_char_pos": 136, "end_char_pos": 186}, {"type": "R", "before": "the system itself", "after": "health-care systems,", "start_char_pos": 212, "end_char_pos": 229}, {"type": "R", "before": "will likely", "after": "are likely to", "start_char_pos": 275, "end_char_pos": 286}, {"type": "R", "before": "conducted an 18-month study of DDI occurrence in Blumenau (Brazil ; pop.", "after": "present a large-scale longitudinal study of the DDI phenomenon at the primary- and secondary-care level using electronic health records from the city of Blumenau in Southern Brazil (pop. ~", "start_char_pos": 331, "end_char_pos": 403}, {"type": "D", "before": "using city-wide drug dispensing data from both primary and secondary-care level. Our goal is also to identify possible risk factors in a large population, ultimately characterizing the burden of DDI for patients, doctors and the public system itself", "after": null, "start_char_pos": 413, "end_char_pos": 662}, {"type": "A", "before": null, "after": "This is the first study of DDI we are aware of that follows an entire city longitudinally for 18 months.", "start_char_pos": 665, "end_char_pos": 665}, {"type": "A", "before": null, "after": "that", "start_char_pos": 675, "end_char_pos": 675}, {"type": "R", "before": "DDI being prescribed concomitantly to almost 5", "after": "drug pairs known to interact were dispensed concomitantly to 12", "start_char_pos": 689, "end_char_pos": 735}, {"type": "R", "before": "city population. 
We also discovered that women are at", "after": "patients in the city's public health-care system. Further, 4\\% of the patients were dispensed major DDI combinations, likely to result in very serious adverse reactions and costs we estimate to be larger than previously reported. DDI results are integrated into associative networks for inference and visualization, revealing key medications and interactions. Analysis reveals that women have", "start_char_pos": 746, "end_char_pos": 799}, {"type": "R", "before": "risk increase of DDI when", "after": "increased risk of DDI as", "start_char_pos": 807, "end_char_pos": 832}, {"type": "R", "before": ", while only having a 6\\% co-administration risk increase. Analysis of the DDI co-occurrence network reveals which DDI pairs are most associated with the observed greater DDI risk for females, demonstrating that contraception and hormone therapy are not the main culprits of the gender disparity, which is maximized after the reproductive years", "after": "; the increase becomes 90\\% when only major DDI are considered", "start_char_pos": 849, "end_char_pos": 1193}, {"type": "R", "before": "dramatically with age, with patients age", "after": "substantially with age. Patients aged", "start_char_pos": 1228, "end_char_pos": 1268}, {"type": "R", "before": "having a 50-fold risk increase in comparison to patients aged 0-19", "after": "years have a 34\\% risk of DDI when they are prescribed two or more drugs concomitantly", "start_char_pos": 1275, "end_char_pos": 1341}, {"type": "R", "before": "several null models demonstrate that this risk increase is not due to increased polypharmacy with age.", "after": "a null model demonstrates that age and women-specific risks from increased polypharmacy far exceed expectations in those populations. 
This suggests that social and biological factors are at play.", "start_char_pos": 1359, "end_char_pos": 1461}, {"type": "R", "before": "while the number of drugs and co-administrations help predict a patient's number of DDI (R^2=.413), they are not sufficient to flag these patients accurately, which we achieve by training classifiers with additional data", "after": "machine learning classifiers accurately predict patients likely to be administered DDI given their history of prescribed drugs, gender, and age", "start_char_pos": 1491, "end_char_pos": 1711}, {"type": "R", "before": ".83,F1", "after": ".7,AUC", "start_char_pos": 1718, "end_char_pos": 1724}, {"type": "R", "before": ".72", "after": ".97", "start_char_pos": 1727, "end_char_pos": 1730}, {"type": "R", "before": "public and private systems alike, resulting in substantial prevention", "after": "health-care systems leading to substantial reduction", "start_char_pos": 1823, "end_char_pos": 1892}, {"type": "R", "before": "ADR and", "after": "adverse reactions and health-care", "start_char_pos": 1908, "end_char_pos": 1915}], "sents_char_pos": [0, 173, 327, 398, 493, 664, 762, 907, 1195, 1461, 1733]} {"doc_id": "1803.08341", "revision_depth": "1", "before_revision": "Fast constant factor approximation algorithms are devised for a ] problem of intersecting a set of straight line segments with the smallest cardinality set of disks of fixed radii r>0, where the set of segments forms a straight line drawing G=(V,E) of a planar graph without edge crossings. Exploiting its tough connection with the geometric Hitting Set problem we give \\left(50+52\\frac{12{13}}+\\varepsilon\\right)-approximate O ( \\left(\\left( |E| ^4\\log{\\varepsilon^2}+\\log|E|{\\varepsilon^3}}\\right) |E| )-time and O(|E| ^2\\log|E| ) spa\\-ce algorithm \\right)\\left({\\varepsilon}}\\right) based on the modified Agarwal-Pan algorithm. 
More accurate (34+24\\sqrt{2}+\\varepsilon)-, \\left(12+6\\sqrt{3}+\\varepsilon\\right)- and \\left(34+38\\sqrt{\\frac{15}{19}}+\\varepsilon\\right) -ap\\-pro\\-xi\\-mate algorithms are proposed for the case where G is any subgraph of either an outerplane or a Gabriel graph or a Delaunay triangulation respectively, which work within the same time and space complexity bounds, where \\varepsilon>0 is an arbitrary small constant .", "after_revision": "Fast constant factor approximation algorithms are devised for an NP- and W 1]-hard problem of intersecting a set of straight line segments with the smallest cardinality set of disks of fixed radii r>0, where the set of segments forms a straight line drawing G=(V,E) of a planar graph without edge crossings. Exploiting tough connection of the problem with the geometric Hitting Set problem , an \\left(50+52\\frac{12{13}}+\\varepsilon\\right)-approximate O \\left(\\left( |E| ^2+\\frac{|E|\\log|E|{\\varepsilon^2}+\\log|E|{\\varepsilon^3}}\\right) |E| ^2\\log|E| \\right)-time and O\\left(\\frac{|E|^2\\log|E|{\\varepsilon}}\\right)-space algorithm is given based on the modified Agarwal-Pan algorithm. More accurate (34+24\\sqrt{2}+\\varepsilon)-, \\left(12+6\\sqrt{3}+\\varepsilon\\right)- and \\left(34+38\\sqrt{\\frac{15}{19}}+\\varepsilon\\right) -approximate algorithms are also proposed for the case where G is any subgraph of either an outerplane or a Gabriel graph or a Delaunay triangulation respectively, which work within the same time and space complexity bounds, where \\varepsilon>0 is an arbitrary small constant . 
Related work only tackles the case where E consists of axis-parallel segments, resulting in an O(|E|\\log|E|)-time and O(|E|\\log|E|)-space 8-approximation .", "edit_actions": [{"type": "R", "before": "a", "after": "an NP- and W", "start_char_pos": 62, "end_char_pos": 63}, {"type": "A", "before": null, "after": "1", "start_char_pos": 64, "end_char_pos": 64}, {"type": "A", "before": null, "after": "-hard", "start_char_pos": 65, "end_char_pos": 65}, {"type": "R", "before": "its tough connection", "after": "tough connection of the problem", "start_char_pos": 302, "end_char_pos": 322}, {"type": "R", "before": "we give", "after": ", an", "start_char_pos": 362, "end_char_pos": 369}, {"type": "D", "before": "(", "after": null, "start_char_pos": 428, "end_char_pos": 429}, {"type": "R", "before": "^4\\log", "after": "^2+\\frac{|E|\\log|E|", "start_char_pos": 447, "end_char_pos": 453}, {"type": "D", "before": ")-time and O(|E|", "after": null, "start_char_pos": 504, "end_char_pos": 520}, {"type": "D", "before": ") spa\\-ce algorithm", "after": null, "start_char_pos": 531, "end_char_pos": 550}, {"type": "A", "before": null, "after": "-time and O", "start_char_pos": 558, "end_char_pos": 558}, {"type": "A", "before": null, "after": "\\frac{|E|^2\\log|E|", "start_char_pos": 564, "end_char_pos": 564}, {"type": "A", "before": null, "after": "-space algorithm is given", "start_char_pos": 585, "end_char_pos": 585}, {"type": "R", "before": "-ap\\-pro\\-xi\\-mate algorithms are", "after": "-approximate algorithms are also", "start_char_pos": 769, "end_char_pos": 802}, {"type": "A", "before": null, "after": ". 
Related work only tackles the case where E consists of axis-parallel segments, resulting in an O(|E|\\log|E|)-time and O(|E|\\log|E|)-space 8-approximation", "start_char_pos": 1046, "end_char_pos": 1046}], "sents_char_pos": [0, 290, 630]} {"doc_id": "1803.08341", "revision_depth": "2", "before_revision": "Fast constant factor approximation algorithms are devised for an NP- and W[1]-hard problem of intersecting a set of straight line segments with the smallest cardinality set of disks of fixed radii r>0, where the set of segments forms a straight line drawing G=(V,E) of a planar graph without edge crossings. Exploiting tough connection of the problem with the geometric Hitting Set problem, an \\left(50+52\\frac{12{13}}+\\varepsilon\\right)-approximate O\\left( %DIFDELCMD < \\left(%%% |E|^2+\\frac{|E|\\log|E|{\\varepsilon^2}+\\log|E|{\\varepsilon^3}} \\right) |E|^2\\log |E|%DIFDELCMD < \\right)%%% -time and O\\left( \\frac{|E|^2\\log|E|{\\varepsilon}} \\right)-space algorithm is given based on the modified Agarwal-Pan algorithm. More accurate (34+242+\\varepsilon)- ,%DIFDELCMD < \\left(%%% 12+6\\sqrt{3+\\varepsilon}%DIFDELCMD < \\right)%%% - and \\left(34+ 38\\sqrt{\\frac{15 +\\varepsilon\\right) -approximate algorithms are also proposed for the case where G is any subgraph of either an outerplane or a Gabriel graph or a Delaunay triangulation respectively, which work within the same time and space complexity bounds, where \\varepsilon>0 is an arbitrary small constant. Related work only tackles the case where E consists of axis-parallel segments, resulting in an O( |E|\\log |E| )-time and O( |E|\\log |E| )-space 8-approximation.", "after_revision": "Fast constant factor approximation algorithms are devised for an NP- and W[1]-hard problem of intersecting a set of n straight line segments with the smallest cardinality set of disks of fixed radii r>0, where the set of segments forms a straight line drawing G=(V,E) of a planar graph without edge crossings. 
Exploiting tough connection of the problem with the geometric Hitting Set problem, an \\left(50+52\\frac{12{13}}+\\varepsilon\\right)-approximate O\\left( %DIFDELCMD < \\left(%%% {\\varepsilon^2}+\\log|E|{\\varepsilon^3}} n^4\\log n \\right) %DIFDELCMD < \\right)%%% -time and O\\left( {\\varepsilon}} n^2\\log n \\right)-space algorithm is given based on the modified Agarwal-Pan algorithm. More accurate (34+242+\\varepsilon)- %DIFDELCMD < \\left(%%% +\\varepsilon}%DIFDELCMD < \\right)%%% and \\left(34+ 44\\sqrt{\\frac{6 +\\varepsilon\\right) -approxi\\-mate algorithms are also proposed for cases where G is any subgraph of either an outerplane graph or a Delaunay triangulation respectively, which work within the same time and space complexity bounds, where \\varepsilon>0 is an arbitrary small constant. Moreover, an O(n^2\\log n)-time and O(n^2)-space 18-approximation is designed for the case where G is any subgraph of a Gabriel graph. To the best of our knowledge, related work only tackles the case where E consists of axis-parallel segments, resulting in an O( n\\log n )-time and O( n\\log n )-space 8-approximation.", "edit_actions": [{"type": "A", "before": null, "after": "n", "start_char_pos": 116, "end_char_pos": 116}, {"type": "D", "before": "|E|^2+\\frac{|E|\\log|E|", "after": null, "start_char_pos": 482, "end_char_pos": 504}, {"type": "A", "before": null, "after": "n^4\\log n", "start_char_pos": 544, "end_char_pos": 544}, {"type": "D", "before": "|E|^2\\log |E|", "after": null, "start_char_pos": 553, "end_char_pos": 566}, {"type": "D", "before": "\\frac{|E|^2\\log|E|", "after": null, "start_char_pos": 608, "end_char_pos": 626}, {"type": "A", "before": null, "after": "n^2\\log n", "start_char_pos": 641, "end_char_pos": 641}, {"type": "D", "before": ",", "after": null, "start_char_pos": 756, "end_char_pos": 757}, {"type": "D", "before": "12+6\\sqrt{3", "after": null, "start_char_pos": 780, "end_char_pos": 791}, {"type": "D", "before": "-", "after": null, 
"start_char_pos": 828, "end_char_pos": 829}, {"type": "R", "before": "38\\sqrt{\\frac{15", "after": "44\\sqrt{\\frac{6", "start_char_pos": 844, "end_char_pos": 860}, {"type": "R", "before": "-approximate", "after": "-approxi\\-mate", "start_char_pos": 881, "end_char_pos": 893}, {"type": "R", "before": "the case", "after": "cases", "start_char_pos": 927, "end_char_pos": 935}, {"type": "D", "before": "or a Gabriel", "after": null, "start_char_pos": 984, "end_char_pos": 996}, {"type": "R", "before": "Related", "after": "Moreover, an O(n^2\\log n)-time and O(n^2)-space 18-approximation is designed for the case where G is any subgraph of a Gabriel graph. To the best of our knowledge, related", "start_char_pos": 1158, "end_char_pos": 1165}, {"type": "R", "before": "|E|\\log |E|", "after": "n\\log n", "start_char_pos": 1256, "end_char_pos": 1267}, {"type": "R", "before": "|E|\\log |E|", "after": "n\\log n", "start_char_pos": 1282, "end_char_pos": 1293}], "sents_char_pos": [0, 308, 719, 1157]} {"doc_id": "1803.10128", "revision_depth": "1", "before_revision": "This paper considers an initial market model, specified by its underlying assets S and its flow of information \\mathbb F, and an arbitrary random time \\tau which is not an \\mathbb F-stopping time. In this setting, our principal goals reside in describing as explicit as possible the set of all deflators (i.e. the dual set of all wealth processes) and the log-optimal portfolio for the stopped model S^{\\tau}. At the practical level, this random time might represent the default time of a firm, the death time of an insured, or more generally an occurrence time of an event that might impact the market somehow. Since the death time and the default time can be seen when they occur only, the progressive enlargement of \\mathbb F with \\tau , that we denote by \\mathbb G, sounds taylor-fit for modelling the new flow of information that incorporates both \\mathbb F and \\tau. 
Thanks to the deep results of Choulli et al (2016) on the ] martingales classification and representation for progressive enlarged filtration, we completely and explicitly describe all the deflators for (S ^{\\tau,\\mathbb G) in terms of the deflators of the initial market model (S} ,\\mathbb F) . This constitute our first principal (and probably the principal) innovative contribution of this paper. Our second principal contribution lies in describing and measuring the impact of \\tau on log-related optimal portfolios, namely the num\\'eraire portfolio and the portfolio solution to the log-utility maximization problem .", "after_revision": "This paper considers an initial market model, specified by its underlying assets S and its flow of information \\mathbb F, and an arbitrary random time \\tau which might not be an \\mathbb F-stopping time. In this setting, our principal goal resides in describing as explicit as possible the set of all deflators , which constitutes the dual set of all \"admissible\" wealth processes, for the stopped model S^{\\tau}. Since the death time and the default time (that \\tau might represent) can be seen when they occur only, the progressive enlargement of \\mathbb F with \\tau sounds tailor-fit for modelling the new flow of information that incorporates both \\mathbb F and \\tau. Thanks to the deep results of Choulli et al . 8], on martingales classification and representation for progressive enlarged filtration, our aim is fully achieved for both cases of local martingale deflators and general supermartingale delators. 
The results are illustrated on several particular models for (\\tau,S,\\mathbb F) such as the discrete-time and the jump-diffusion settings for (S ,\\mathbb G) in terms of the deflators of the initial market model (S} ,\\mathbb F) , and the case when \\tau avoids \\mathbb F-stopping times .", "edit_actions": [{"type": "R", "before": "is not", "after": "might not be", "start_char_pos": 162, "end_char_pos": 168}, {"type": "R", "before": "goals reside", "after": "goal resides", "start_char_pos": 228, "end_char_pos": 240}, {"type": "R", "before": "(i.e.", "after": ", which constitutes", "start_char_pos": 304, "end_char_pos": 309}, {"type": "R", "before": "wealth processes) and the log-optimal portfolio", "after": "\"admissible\" wealth processes,", "start_char_pos": 330, "end_char_pos": 377}, {"type": "D", "before": "At the practical level, this random time might represent the default time of a firm, the death time of an insured, or more generally an occurrence time of an event that might impact the market somehow.", "after": null, "start_char_pos": 410, "end_char_pos": 611}, {"type": "A", "before": null, "after": "(that \\tau might represent)", "start_char_pos": 654, "end_char_pos": 654}, {"type": "R", "before": ", that we denote by \\mathbb G, sounds taylor-fit", "after": "sounds tailor-fit", "start_char_pos": 740, "end_char_pos": 788}, {"type": "R", "before": "(2016) on the", "after": ".", "start_char_pos": 918, "end_char_pos": 931}, {"type": "A", "before": null, "after": "8", "start_char_pos": 932, "end_char_pos": 932}, {"type": "A", "before": null, "after": ", on", "start_char_pos": 933, "end_char_pos": 933}, {"type": "R", "before": "we completely and explicitly describe all the deflators for", "after": "our aim is fully achieved for both cases of local martingale deflators and general supermartingale delators. 
The results are illustrated on several particular models for (\\tau,S,\\mathbb F) such as the discrete-time and the jump-diffusion settings for", "start_char_pos": 1017, "end_char_pos": 1076}, {"type": "D", "before": "^{\\tau", "after": null, "start_char_pos": 1080, "end_char_pos": 1086}, {"type": "R", "before": ". This constitute our first principal (and probably the principal) innovative contribution of this paper. Our second principal contribution lies in describing and measuring the impact of \\tau on log-related optimal portfolios, namely the num\\'eraire portfolio and the portfolio solution to the log-utility maximization problem", "after": ", and the case when \\tau avoids \\mathbb F-stopping times", "start_char_pos": 1168, "end_char_pos": 1494}], "sents_char_pos": [0, 196, 409, 611, 873, 1169, 1273]} {"doc_id": "1804.00049", "revision_depth": "1", "before_revision": "Several neuroimaging markers have been established for the early diagnosis of Alzheimer's disease, among them amyloid-beta deposition, glucose metabolism, and gray matter volume. Up to now, these imaging modalitieswere mostly analyzed separately from each other, and little is known about the regional interrelation and dependency of these markers. Gaussian graphical models (GGMs) are able to estimate the conditional dependency between many individual random variables. We applied GGMs for studying the inter-regional associations and dependencies between multimodal imaging markers in prodromal Alzheimer's disease . Data from N=667 subjects with mild cognitive impairment, dementia, and cognitively healthy controls were obtained from the ADNI . Mean amyloid load , glucose metabolism , and gray matter volume was calculated for each brain region. GGMs were estimated using a Bayesian framework and for each individual diagnosis, graph-theoretical statistics were calculated to determine structural changes associated with disease severity. Highly inter-correlated regions, e.g. 
adjacent regions in the same lobes, formed distinct clusters but included only regions within the same imaging modality. Hardly any associations were found between different modalities, indicating almost no conditional dependency of brain regions across modalities when considering the covariance explained by all other regions. Network measures clustering coefficient and path length were significantly altered across diagnostic groups, with a biphasic u-shape trajectory . GGMs showed almost no conditional dependencies between modalitieswhen at the same time considering various other regions within the same modalities. However, this approach could be used as a clustering method to derive graph statistics in future studies omitting the need to binarize the network as currently being done for connections based on Pearson correlation .", "after_revision": "A sequence of pathological changes takes place in Alzheimer's disease, which can be assessed in vivo using various brain imaging methods. Currently, there is no appropriate statistical model available that can easily integrate multiple imaging modalities, being able to utilize the additional information provided from the combined data. We applied Gaussian graphical models (GGMs) for analyzing the conditional dependency networks of multimodal neuroimaging data and assessed alterations of the network structure in mild cognitive impairment (MCI) and Alzheimer's dementia (AD) compared to cognitively healthy controls . Data from N=667 subjects were obtained from the Alzheimer's Disease Neuroimaging Initiative . Mean amyloid load (AV45-PET) , glucose metabolism (FDG-PET) , and gray matter volume (MRI) was calculated for each brain region. Separate GGMs were estimated using a Bayesian framework for the combined multimodal data for each diagnostic category. Graph-theoretical statistics were calculated to determine network alterations associated with disease severity. 
Network measures clustering coefficient , path length and small-world coefficient were significantly altered across diagnostic groups, with a biphasic u-shape trajectory , i.e. increased small-world coefficient in early MCI, intermediate values in late MCI, and decreased values in AD patients compared to controls. In contrast, no group differences were found for clustering coefficient and small-world coefficient when estimating conditional dependency networks on single imaging modalities . GGMs provide a useful methodology to analyze the conditional dependency networks of multimodal neuroimaging data .", "edit_actions": [{"type": "R", "before": "Several neuroimaging markers have been established for the early diagnosis of", "after": "A sequence of pathological changes takes place in", "start_char_pos": 0, "end_char_pos": 77}, {"type": "R", "before": "among them amyloid-beta deposition, glucose metabolism, and gray matter volume. Up to now, these imaging modalitieswere mostly analyzed separately from each other, and little is known about the regional interrelation and dependency of these markers.", "after": "which can be assessed in vivo using various brain imaging methods. Currently, there is no appropriate statistical model available that can easily integrate multiple imaging modalities, being able to utilize the additional information provided from the combined data. We applied", "start_char_pos": 99, "end_char_pos": 348}, {"type": "R", "before": "are able to estimate", "after": "for analyzing", "start_char_pos": 382, "end_char_pos": 402}, {"type": "R", "before": "between many individual random variables. 
We applied GGMs for studying the inter-regional associations and dependencies between multimodal imaging markers in prodromal", "after": "networks of multimodal neuroimaging data and assessed alterations of the network structure in mild cognitive impairment (MCI) and", "start_char_pos": 430, "end_char_pos": 597}, {"type": "R", "before": "disease", "after": "dementia (AD) compared to cognitively healthy controls", "start_char_pos": 610, "end_char_pos": 617}, {"type": "D", "before": "with mild cognitive impairment, dementia, and cognitively healthy controls", "after": null, "start_char_pos": 645, "end_char_pos": 719}, {"type": "R", "before": "ADNI", "after": "Alzheimer's Disease Neuroimaging Initiative", "start_char_pos": 743, "end_char_pos": 747}, {"type": "A", "before": null, "after": "(AV45-PET)", "start_char_pos": 768, "end_char_pos": 768}, {"type": "A", "before": null, "after": "(FDG-PET)", "start_char_pos": 790, "end_char_pos": 790}, {"type": "A", "before": null, "after": "(MRI)", "start_char_pos": 816, "end_char_pos": 816}, {"type": "A", "before": null, "after": "Separate", "start_char_pos": 855, "end_char_pos": 855}, {"type": "R", "before": "and for each individual diagnosis, graph-theoretical", "after": "for the combined multimodal data for each diagnostic category. Graph-theoretical", "start_char_pos": 903, "end_char_pos": 955}, {"type": "R", "before": "structural changes", "after": "network alterations", "start_char_pos": 996, "end_char_pos": 1014}, {"type": "D", "before": "Highly inter-correlated regions, e.g. adjacent regions in the same lobes, formed distinct clusters but included only regions within the same imaging modality. 
Hardly any associations were found between different modalities, indicating almost no conditional dependency of brain regions across modalities when considering the covariance explained by all other regions.", "after": null, "start_char_pos": 1049, "end_char_pos": 1415}, {"type": "R", "before": "and path length", "after": ", path length and small-world coefficient", "start_char_pos": 1456, "end_char_pos": 1471}, {"type": "A", "before": null, "after": ", i.e. increased small-world coefficient in early MCI, intermediate values in late MCI, and decreased values in AD patients compared to controls. In contrast, no group differences were found for clustering coefficient and small-world coefficient when estimating conditional dependency networks on single imaging modalities", "start_char_pos": 1560, "end_char_pos": 1560}, {"type": "R", "before": "GGMs showed almost no conditional dependencies between modalitieswhen at the same time considering various other regions within the same modalities. However, this approach could be used as a clustering method to derive graph statistics in future studies omitting the need to binarize the network as currently being done for connections based on Pearson correlation", "after": "GGMs provide a useful methodology to analyze the conditional dependency networks of multimodal neuroimaging data", "start_char_pos": 1563, "end_char_pos": 1927}], "sents_char_pos": [0, 178, 348, 471, 854, 1048, 1207, 1415, 1711]} {"doc_id": "1805.02411", "revision_depth": "1", "before_revision": "Uncovering modular structure in networks is fundamental for advancing the understanding of complex systems in biology, physics, engineering, and technology . Community detection provides a way to computationally identify candidate modules as hypotheses, which then need to be experimentally validated . However, validation of detected communities requires expensive and time consuming experimental methods, such as mutagenesis in a wet biological laboratory. 
As a consequence only a limited number of communities can be experimentally validated, and it is thus important to determine which communities to select for downstream validation and experimentation. Here we develop CRank, an automatic method for prioritizing network communities and identifying the most promising ones for further experimentation . CRank efficiently evaluates robustness and magnitude of structural features of each community and then combines these features to obtain the community prioritization. CRank can be used with any community detection method and scales to large networks . It needs only information provided by the network structure and does not require any additional metadata or labels. However, when available, CRank can incorporate domain-specific information to further boost performance. Experiments on many diverse and important biological networks demonstrate that the proposed approach effectively prioritizes communities, yielding a nearly 50-fold improvement in community prioritization over a baseline ordering of detected communities. Taken together, CRank represents a network-based approach to identify high-quality communities, even for domains at the frontier of science where supervised meta information is not available .", "after_revision": "Uncovering modular structure in networks is fundamental for systems in biology, physics, and engineering . Community detection identifies candidate modules as hypotheses, which then need to be validated through experiments, such as mutagenesis in a biological laboratory. Only a few communities can typically be validated, and it is thus important to prioritize which communities to select for downstream experimentation. Here we develop CRank, a mathematically principled approach for prioritizing network communities . CRank efficiently evaluates robustness and magnitude of structural features of each community and then combines these features into the community prioritization. 
CRank can be used with any community detection method . It needs only information provided by the network structure and does not require any additional metadata or labels. However, when available, CRank can incorporate domain-specific information to further boost performance. Experiments on many large networks show that CRank effectively prioritizes communities, yielding a nearly 50-fold improvement in community prioritization .", "edit_actions": [{"type": "D", "before": "advancing the understanding of complex", "after": null, "start_char_pos": 60, "end_char_pos": 98}, {"type": "R", "before": "engineering, and technology", "after": "and engineering", "start_char_pos": 128, "end_char_pos": 155}, {"type": "R", "before": "provides a way to computationally identify", "after": "identifies", "start_char_pos": 178, "end_char_pos": 220}, {"type": "R", "before": "experimentally validated . However, validation of detected communities requires expensive and time consuming experimental methods,", "after": "validated through experiments,", "start_char_pos": 276, "end_char_pos": 406}, {"type": "D", "before": "wet", "after": null, "start_char_pos": 432, "end_char_pos": 435}, {"type": "R", "before": "As a consequence only a limited number of communities can be experimentally", "after": "Only a few communities can typically be", "start_char_pos": 459, "end_char_pos": 534}, {"type": "R", "before": "determine", "after": "prioritize", "start_char_pos": 574, "end_char_pos": 583}, {"type": "D", "before": "validation and", "after": null, "start_char_pos": 627, "end_char_pos": 641}, {"type": "R", "before": "an automatic method", "after": "a mathematically principled approach", "start_char_pos": 682, "end_char_pos": 701}, {"type": "D", "before": "and identifying the most promising ones for further experimentation", "after": null, "start_char_pos": 739, "end_char_pos": 806}, {"type": "R", "before": "to obtain", "after": "into", "start_char_pos": 936, "end_char_pos": 945}, {"type": "D", 
"before": "and scales to large networks", "after": null, "start_char_pos": 1030, "end_char_pos": 1058}, {"type": "R", "before": "diverse and important biological networks demonstrate that the proposed approach", "after": "large networks show that CRank", "start_char_pos": 1302, "end_char_pos": 1382}, {"type": "D", "before": "over a baseline ordering of detected communities. Taken together, CRank represents a network-based approach to identify high-quality communities, even for domains at the frontier of science where supervised meta information is not available", "after": null, "start_char_pos": 1486, "end_char_pos": 1726}], "sents_char_pos": [0, 157, 302, 458, 658, 975, 1060, 1176, 1281, 1535]} {"doc_id": "1805.03275", "revision_depth": "1", "before_revision": "Ordinary least squares provides the optimal linear approximation to the true regression function under misspecification . This paper investigates the Instrumental Variables (IV) version of this problem. The resulting population parameter is called the Optimal Linear IV Approximation (OLIVA). This paper shows that a necessary condition for regular identification of the OLIVA is also sufficient for existence of an IV estimand in a linear IV model. The necessary condition holds for the important case of a binary endogenous treatment, leading also to a LATE interpretation with positive weights . The instrument in the IV estimand is unknown and is estimated in a first step . A Two-Step IV (TSIV) estimator is proposed . We establish the asymptotic normality of a debiased TSIV estimator based on locally robust moments. The TSIV estimator does not require neither completeness nor identification of the instrument. As a by-product of our analysis, we robustify the classical Hausman test for exogeneity against misspecification of the linear model. 
Monte Carlo simulations suggest excellent finite sample performance for the proposed inferences.", "after_revision": "Ordinary least squares provides the optimal linear approximation to the true regression function . This paper investigates the Instrumental Variables (IV) version of this problem. The resulting parameter is called the Optimal Linear IV Approximation (OLIVA). The OLIVA is invariant to the distribution of the instruments. This paper shows that a necessary condition for standard inference on the OLIVA is also sufficient for the existence of an IV estimand in a linear IV model. The necessary regularity condition holds for a binary endogenous treatment, leading also to a LATE interpretation with positive weights in a fully heterogeneous model . The instrument in the IV estimand is unknown and may not be identified . A Two-Step IV (TSIV) estimator based on a Tikhonov regularized instrument is proposed, which can be implemented by standard regression routines . We establish the asymptotic normality of the TSIV estimator assuming neither completeness nor identification of the instrument. As an important application of our analysis, we robustify the classical Hausman test for exogeneity against misspecification of the linear model. 
Monte Carlo simulations suggest a good finite sample performance for the proposed inferences.", "edit_actions": [{"type": "D", "before": "under misspecification", "after": null, "start_char_pos": 97, "end_char_pos": 119}, {"type": "D", "before": "population", "after": null, "start_char_pos": 217, "end_char_pos": 227}, {"type": "A", "before": null, "after": "The OLIVA is invariant to the distribution of the instruments.", "start_char_pos": 293, "end_char_pos": 293}, {"type": "R", "before": "regular identification of", "after": "standard inference on", "start_char_pos": 342, "end_char_pos": 367}, {"type": "A", "before": null, "after": "the", "start_char_pos": 401, "end_char_pos": 401}, {"type": "A", "before": null, "after": "regularity", "start_char_pos": 466, "end_char_pos": 466}, {"type": "D", "before": "the important case of", "after": null, "start_char_pos": 487, "end_char_pos": 508}, {"type": "A", "before": null, "after": "in a fully heterogeneous model", "start_char_pos": 600, "end_char_pos": 600}, {"type": "R", "before": "is estimated in a first step", "after": "may not be identified", "start_char_pos": 652, "end_char_pos": 680}, {"type": "R", "before": "is proposed", "after": "based on a Tikhonov regularized instrument is proposed, which can be implemented by standard regression routines", "start_char_pos": 714, "end_char_pos": 725}, {"type": "R", "before": "a debiased TSIV estimator based on locally robust moments. 
The TSIV estimator does not require", "after": "the TSIV estimator assuming", "start_char_pos": 769, "end_char_pos": 863}, {"type": "R", "before": "a by-product", "after": "an important application", "start_char_pos": 926, "end_char_pos": 938}, {"type": "R", "before": "excellent", "after": "a good", "start_char_pos": 1089, "end_char_pos": 1098}], "sents_char_pos": [0, 121, 202, 292, 451, 602, 727, 827, 922, 1056]} {"doc_id": "1805.03792", "revision_depth": "1", "before_revision": "In his seminal, three paper series, Linsker provided a mechanism for how random activity in the visual pathway could give rise to many of the features observed experimentally in the early stages of visual processing . Owing to the complexity of multilayer models, an implicit assumption in Linsker's and subsequent papers has been that propagation delay is homogeneous and , consequently, plays little functional role in neural behaviour. In this paper, we relax this assumption to examine the impact of axonal distance dependent propagation delay on neural learning. We show that propagation delay induces lowpass filtering by dispersing the arrival times of spikes from presynaptic neurons, providing a natural correlation cancellation mechanism for distal connections. The cutoff frequency decreases as the radial propagation delay within a layer , relative to propagation delay between the layers, increases, introducing an upper limit on temporal resolution. Given that the postsynaptic potential also acts as a lowpass filter, we show that the effective time constant of each should enable the processing of similar scales of temporal information. This result has implications for the visual system, in which receptive field size and, thus, radial propagation delay, increases with eccentricity. Furthermore, the network response is frequency dependent since higher frequencies require increased input amplitude to compensate for attenuation. 
This concords with frequency dependent contrast sensitivity in the visual system, which changes with eccentricity and receptive field size. Finally, we determine the eigenfunctions of both Linsker's network, and the network with propagation delay . We show that the addition of propagation delay stabilizes the leading eigenfunction to changes in homeostatic parameters , and hence stabilizes the resulting receptive field structure .", "after_revision": "The mechanisms underlying how activity in the visual pathway may give rise through neural plasticity to many of the features observed experimentally in the early stages of visual processing was provided by Linkser in a seminal, three-paper series . Owing to the complexity of multi-layer models, an implicit assumption in Linsker's and subsequent papers has been that propagation delay is homogeneous and plays little functional role in neural behaviour. We relax this assumption to examine the impact of distance-dependent axonal propagation delay on neural learning. We show that propagation delay induces low-pass filtering by dispersing the arrival times of spikes from presynaptic neurons, providing a natural correlation cancellation mechanism for distal connections. The cut-off frequency decreases as the radial propagation delay within a layer increases relative to propagation delay between the layers, introducing an upper limit on temporal resolution. Given that the PSP also acts as a low-pass filter, we show that the effective time constant of each should enable the processing of similar scales of temporal information. This result has implications for the visual system, in which receptive field size and, thus, radial propagation delay, increases with eccentricity. Furthermore, the network response is frequency dependent since higher frequencies require increased input amplitude to compensate for attenuation. 
This concords with frequency-dependent contrast sensitivity in the visual system, which changes with eccentricity and receptive field size. We further show that the proportion of inhibition relative to excitation is larger where radial propagation delay is long relative to inter-laminar propagation delay . We show that the addition of propagation delay reduces the range in the cell's on-center size, providing stability to variations in homeostatic parameters .", "edit_actions": [{"type": "R", "before": "In his seminal, three paper series, Linsker provided a mechanism for how random", "after": "The mechanisms underlying how", "start_char_pos": 0, "end_char_pos": 79}, {"type": "R", "before": "could give rise", "after": "may give rise through neural plasticity", "start_char_pos": 111, "end_char_pos": 126}, {"type": "A", "before": null, "after": "was provided by Linkser in a seminal, three-paper series", "start_char_pos": 216, "end_char_pos": 216}, {"type": "R", "before": "multilayer", "after": "multi-layer", "start_char_pos": 246, "end_char_pos": 256}, {"type": "D", "before": ", consequently,", "after": null, "start_char_pos": 374, "end_char_pos": 389}, {"type": "R", "before": "In this paper, we", "after": "We", "start_char_pos": 440, "end_char_pos": 457}, {"type": "R", "before": "axonal distance dependent", "after": "distance-dependent axonal", "start_char_pos": 505, "end_char_pos": 530}, {"type": "R", "before": "lowpass", "after": "low-pass", "start_char_pos": 608, "end_char_pos": 615}, {"type": "R", "before": "cutoff", "after": "cut-off", "start_char_pos": 777, "end_char_pos": 783}, {"type": "R", "before": ",", "after": "increases", "start_char_pos": 851, "end_char_pos": 852}, {"type": "D", "before": "increases,", "after": null, "start_char_pos": 903, "end_char_pos": 913}, {"type": "R", "before": "postsynaptic potential", "after": "PSP", "start_char_pos": 980, "end_char_pos": 1002}, {"type": "R", "before": "lowpass", "after": "low-pass", "start_char_pos": 1018, 
"end_char_pos": 1025}, {"type": "R", "before": "frequency dependent", "after": "frequency-dependent", "start_char_pos": 1469, "end_char_pos": 1488}, {"type": "R", "before": "Finally, we determine the eigenfunctions of both Linsker's network, and the network with propagation delay", "after": "We further show that the proportion of inhibition relative to excitation is larger where radial propagation delay is long relative to inter-laminar propagation delay", "start_char_pos": 1590, "end_char_pos": 1696}, {"type": "R", "before": "stabilizes the leading eigenfunction to changes", "after": "reduces the range in the cell's on-center size, providing stability to variations", "start_char_pos": 1746, "end_char_pos": 1793}, {"type": "D", "before": ", and hence stabilizes the resulting receptive field structure", "after": null, "start_char_pos": 1820, "end_char_pos": 1882}], "sents_char_pos": [0, 218, 439, 568, 772, 964, 1154, 1302, 1449, 1589, 1698]} {"doc_id": "1805.08831", "revision_depth": "1", "before_revision": "This paper presents a new scalable parallelization scheme to generate the 3D Delaunay triangulation of a given set of points. Our first contribution is an efficient serial implementation of the incremental Delaunay insertion algorithm. A simple dedicated data structure and a number of improvements in the insertion algorithm have permitted to accelerate by a factor three reference implementations . Our second contribution is a multi-threaded version of the Delaunay kernel able to concurrently insert vertices. Moore curve coordinates are used to partition the point set, avoiding so heavy synchronization overheads. Conflicts are managed by modification of the partition with a simple rescaling of the space-filling curve. The performances of our implementation have been measured on three different processors, Intel core-i7, Intel Xeon Phi and AMD EPYC, on which we have been able to compute 3 billion tetrahedra in 53 seconds. 
This corresponds to a generation rate of over 55 million tetrahedra per second which is, to our best knowledge, three times the rate reached by the current fastest implementation. It is finally shown how this very efficient parallel Delaunay triangulation can be integrated in a Delaunay refinement mesh generator taking as input the boundary of the domain to mesh.", "after_revision": "This paper presents a new scalable parallelization scheme to generate the 3D Delaunay triangulation of a given set of points. Our first contribution is an efficient serial implementation of the incremental Delaunay insertion algorithm. A simple dedicated data structure , an efficient sorting of the points and the optimization of the insertion algorithm have permitted to accelerate reference implementations by a factor three . Our second contribution is a multi-threaded version of the Delaunay kernel that is able to concurrently insert vertices. Moore curve coordinates are used to partition the point set, avoiding heavy synchronization overheads. Conflicts are managed by modifying the partitions with a simple rescaling of the space-filling curve. The performances of our implementation have been measured on three different processors, an Intel core-i7, an Intel Xeon Phi and an AMD EPYC, on which we have been able to compute 3 billion tetrahedra in 53 seconds. This corresponds to a generation rate of over 55 million tetrahedra per second . 
We finally show how this very efficient parallel Delaunay triangulation can be integrated in a Delaunay refinement mesh generator which takes as input the triangulated surface boundary of the volume to mesh.", "edit_actions": [{"type": "R", "before": "and a number of improvements in", "after": ", an efficient sorting of the points and the optimization of", "start_char_pos": 270, "end_char_pos": 301}, {"type": "A", "before": null, "after": "reference implementations", "start_char_pos": 355, "end_char_pos": 355}, {"type": "D", "before": "reference implementations", "after": null, "start_char_pos": 374, "end_char_pos": 399}, {"type": "A", "before": null, "after": "that is", "start_char_pos": 477, "end_char_pos": 477}, {"type": "D", "before": "so", "after": null, "start_char_pos": 586, "end_char_pos": 588}, {"type": "R", "before": "modification of the partition", "after": "modifying the partitions", "start_char_pos": 647, "end_char_pos": 676}, {"type": "A", "before": null, "after": "an", "start_char_pos": 818, "end_char_pos": 818}, {"type": "A", "before": null, "after": "an", "start_char_pos": 834, "end_char_pos": 834}, {"type": "A", "before": null, "after": "an", "start_char_pos": 854, "end_char_pos": 854}, {"type": "R", "before": "which is, to our best knowledge, three times the rate reached by the current fastest implementation. It is finally shown", "after": ". 
We finally show", "start_char_pos": 1018, "end_char_pos": 1138}, {"type": "R", "before": "taking", "after": "which takes", "start_char_pos": 1253, "end_char_pos": 1259}, {"type": "A", "before": null, "after": "triangulated surface", "start_char_pos": 1273, "end_char_pos": 1273}, {"type": "R", "before": "domain", "after": "volume", "start_char_pos": 1290, "end_char_pos": 1296}], "sents_char_pos": [0, 125, 235, 401, 515, 621, 728, 938, 1118]} {"doc_id": "1805.11312", "revision_depth": "2", "before_revision": "The protein structure reconstruction from Nuclear Magnetic Resonance (NMR) experiments heavily relies on computational algorithms , such as matrix completion (MC) . Recently, some effective low-rank matrix completion methods, like ASD and ScaledASD, have been successfully applied in image processing, which inspired us to apply them in protein structure reconstruction. Here we present an efficient method for determining protein structures from experimental NMR NOESY distances , by combining the ScaledASD algorithm with several afterwards procedures including chirality refinement, distance lower (upper) bound refinement, force field-based energy minimization (EM) and water refinement. By comparing several metrics on conformation evaluation between our results and the PDBdeposits , we conclude that our method is consistent with the popular used methods. In particular, our results show higher validities in MPscore, Molprobity clash-score, Procheck dihedral angles G-factor than PDB models. In the end, we compared our calculation results with PDB models by checking the structural similarity to X-ray crystallographic structure for a special dataset. The software and its MATLAB source codes are available by URL", "after_revision": "Protein structure reconstruction from Nuclear Magnetic Resonance (NMR) experiments largely relies on computational algorithms . 
Recently, some effective low-rank matrix completion (MC) methods, such as ASD and ScaledASD, have been successfully applied to image processing, which inspires us to apply the methods to reconstruct protein structures. In this paper, we present an efficient method to determine protein structures based on experimental NMR NOESY distances . ScaledASD algorithm is used in the method with several post-procedures including chirality refinement, distance lower (upper) bound refinement, force field-based energy minimization (EM) and water refinement. By comparing several metrics in the conformation evaluation on our results with Protein Data Bank (PDB) structures , we conclude that our method is consistent with the popularly used methods. In particular, our results show higher validities in Procheck dihedral angles G-factor . Furthermore, we compare our calculation results with PDB structures by examining the structural similarity to X-ray crystallographic structures in a special dataset. The software and its MATLAB source codes are available in URL", "edit_actions": [{"type": "R", "before": "The protein", "after": "Protein", "start_char_pos": 0, "end_char_pos": 11}, {"type": "R", "before": "heavily", "after": "largely", "start_char_pos": 87, "end_char_pos": 94}, {"type": "D", "before": ", such as matrix completion (MC)", "after": null, "start_char_pos": 130, "end_char_pos": 162}, {"type": "R", "before": "methods, like", "after": "(MC) methods, such as", "start_char_pos": 217, "end_char_pos": 230}, {"type": "R", "before": "in", "after": "to", "start_char_pos": 281, "end_char_pos": 283}, {"type": "R", "before": "inspired", "after": "inspires", "start_char_pos": 308, "end_char_pos": 316}, {"type": "R", "before": "them in protein structure reconstruction. Here", "after": "the methods to reconstruct protein structures. 
In this paper,", "start_char_pos": 329, "end_char_pos": 375}, {"type": "R", "before": "for determining protein structures from", "after": "to determine protein structures based on", "start_char_pos": 407, "end_char_pos": 446}, {"type": "R", "before": ", by combining the ScaledASD algorithm with several afterwards procedures", "after": ". ScaledASD algorithm is used in the method with several post-procedures", "start_char_pos": 480, "end_char_pos": 553}, {"type": "R", "before": "on conformation evaluation between our results and the PDBdeposits", "after": "in the conformation evaluation on our results with Protein Data Bank (PDB) structures", "start_char_pos": 721, "end_char_pos": 787}, {"type": "R", "before": "popular", "after": "popularly", "start_char_pos": 841, "end_char_pos": 848}, {"type": "D", "before": "MPscore, Molprobity clash-score,", "after": null, "start_char_pos": 916, "end_char_pos": 948}, {"type": "R", "before": "than PDB models. In the end, we compared", "after": ". Furthermore, we compare", "start_char_pos": 983, "end_char_pos": 1023}, {"type": "R", "before": "models by checking", "after": "structures by examining", "start_char_pos": 1057, "end_char_pos": 1075}, {"type": "R", "before": "structure for", "after": "structures in", "start_char_pos": 1128, "end_char_pos": 1141}, {"type": "R", "before": "by", "after": "in", "start_char_pos": 1216, "end_char_pos": 1218}], "sents_char_pos": [0, 370, 691, 862, 999, 1160]} {"doc_id": "1806.03142", "revision_depth": "1", "before_revision": "Direct cDNA preamplification protocols developed for single-cell RNA-seq (scRNA-seq) have enabled transcriptome profiling of rare cells without having to pool multiple samples or to perform RNA extraction. We term this approach limiting-cell RNA-seq (lcRNA-seq) . Unlike scRNA-seq, which focuses on 'cell-atlasing', lcRNA-seq focuses on identifying differentially expressed genes (DEGs) between experimental groups. 
This requires accounting for systems noise which can obscure biological differences. We present CLEAR, a workflow that identifies robust transcripts in lcRNA-seq data for between-group comparisons. To develop CLEAR, we compared DEGs from RNA extracted from FACS-derived CD5+ and CD5- cells from a single chronic lymphocytic leukemia patient diluted to input RNA levels of 10-, 100- and 1,000pg. Data quality at ultralow input levels are known to be noisy. When using CLEAR transcripts vs. using all available transcripts, downstream analyses reveal more shared DEGs , improved Principal Component Analysis separation of cell type, and increased similarity between results across different input RNA amounts. CLEAR was applied to two publicly available ultralow input RNA-seq data and an in-house murine neural cell lcRNA-seq dataset . CLEAR provides a novel way to visualize the public datasetswhile validates cell phenotype markers for astrocytes, neural stem and progenitor cells .", "after_revision": "Direct cDNA preamplification protocols developed for single-cell RNA-seq have enabled transcriptome profiling of precious clinical samples and rare cells without sample pooling or RNA extraction. Currently, there is no algorithm optimized to reveal and remove noisy transcripts in limiting-cell RNA-seq (lcRNA-seq) data for downstream analyses. Herein, we present CLEAR, a workflow that identifies reliably quantifiable transcripts in lcRNA-seq data for differentially expressed gene (DEG) analysis. Libraries at three input amounts of FACS-derived CD5+ and CD5- cells from a chronic lymphocytic leukemia patient were used to develop CLEAR. When using CLEAR transcripts vs. using all transcripts, downstream analyses revealed more shared transcripts across different input RNA amounts , improved Principal Component Analysis (PCA) separation, and yielded more DEGs between cell types. As proof-of-principle, CLEAR was applied to an in-house lcRNA-seq dataset and two public datasets. 
When imputation is used, CLEAR is also adaptable to large clinical studies and for single cell analyses .", "edit_actions": [{"type": "D", "before": "(scRNA-seq)", "after": null, "start_char_pos": 73, "end_char_pos": 84}, {"type": "A", "before": null, "after": "precious clinical samples and", "start_char_pos": 125, "end_char_pos": 125}, {"type": "R", "before": "having to pool multiple samples or to perform", "after": "sample pooling or", "start_char_pos": 145, "end_char_pos": 190}, {"type": "R", "before": "We term this approach", "after": "Currently, there is no algorithm optimized to reveal and remove noisy transcripts in", "start_char_pos": 207, "end_char_pos": 228}, {"type": "R", "before": ". Unlike scRNA-seq, which focuses on 'cell-atlasing', lcRNA-seq focuses on identifying differentially expressed genes (DEGs) between experimental groups. This requires accounting for systems noise which can obscure biological differences. We", "after": "data for downstream analyses. Herein, we", "start_char_pos": 263, "end_char_pos": 504}, {"type": "R", "before": "robust", "after": "reliably quantifiable", "start_char_pos": 547, "end_char_pos": 553}, {"type": "R", "before": "between-group comparisons. To develop CLEAR, we compared DEGs from RNA extracted from", "after": "differentially expressed gene (DEG) analysis. Libraries at three input amounts of", "start_char_pos": 588, "end_char_pos": 673}, {"type": "D", "before": "single", "after": null, "start_char_pos": 714, "end_char_pos": 720}, {"type": "R", "before": "diluted to input RNA levels of 10-, 100- and 1,000pg. 
Data quality at ultralow input levels are known to be noisy.", "after": "were used to develop CLEAR.", "start_char_pos": 758, "end_char_pos": 872}, {"type": "D", "before": "available", "after": null, "start_char_pos": 916, "end_char_pos": 925}, {"type": "R", "before": "reveal more shared DEGs", "after": "revealed more shared transcripts across different input RNA amounts", "start_char_pos": 959, "end_char_pos": 982}, {"type": "R", "before": "separation of cell type, and increased similarity between results across different input RNA amounts.", "after": "(PCA) separation, and yielded more DEGs between cell types. As proof-of-principle,", "start_char_pos": 1023, "end_char_pos": 1124}, {"type": "D", "before": "two publicly available ultralow input RNA-seq data and", "after": null, "start_char_pos": 1146, "end_char_pos": 1200}, {"type": "D", "before": "murine neural cell", "after": null, "start_char_pos": 1213, "end_char_pos": 1231}, {"type": "R", "before": ". CLEAR provides a novel way to visualize the public datasetswhile validates cell phenotype markers for astrocytes, neural stem and progenitor cells", "after": "and two public datasets. When imputation is used, CLEAR is also adaptable to large clinical studies and for single cell analyses", "start_char_pos": 1250, "end_char_pos": 1398}], "sents_char_pos": [0, 206, 416, 501, 614, 811, 872, 1124]} {"doc_id": "1807.01491", "revision_depth": "1", "before_revision": "NMDA receptors (NMDA-R) typically contribute to excitatory synaptic transmission in the central nervous system. While calcium influx through NMDA-R plays a critical role in synaptic plasticity, indirect experimental evidence also exists demonstrating actions of NMDAR-mediated calcium influx on neuronal excitability through the activation of calcium-activated potassium channels. But, so far, this mechanism has not been studied theoretically. 
Our theoretical model provide a simple description of neuronal electrical activity including the tonic activity of NMDA receptors and a cytosolic calcium compartment. We show that calcium influx through NMDA-R can directly be coupled to activation of calcium-activated potassium channels providing an overall inhibitory effect on neuronal excitability. Furthermore, the presence of tonic NMDA-R activity promotes bistability in electrical activity by dramatically increasing the stimulus interval where both a stable steady state and repetitive firing can exist. This results could provide an intrinsic mechanism for the constitution of memory traces in neuronal circuits. They also shed light on the way by which beta-amyloids can decrease neuronal activity when interfering with NMDA-R in Alzheimer's disease .", "after_revision": "NMDA receptors (NMDA-R) typically contribute to excitatory synaptic transmission in the central nervous system. While calcium influx through NMDA-R plays a critical role in synaptic plasticity, experimental evidence indicates that NMDAR-mediated calcium influx also modifies neuronal excitability through the activation of calcium-activated potassium channels. This mechanism has not yet been studied theoretically. Our theoretical model provides a simple description of neuronal electrical activity that takes into account the tonic activity of extrasynaptic NMDA receptors and a cytosolic calcium compartment. We show that calcium influx mediated by the tonic activity of NMDA-R can be coupled directly to the activation of calcium-activated potassium channels , resulting in an overall inhibitory effect on neuronal excitability. Furthermore, the presence of tonic NMDA-R activity promotes bistability in electrical activity by dramatically increasing the stimulus interval where both a stable steady state and repetitive firing can coexist. These results could provide an intrinsic mechanism for the constitution of memory traces in neuronal circuits. 
They also shed light on the way by which \\beta-amyloids can alter neuronal activity when interfering with NMDA-R in Alzheimer's disease and cerebral ischemia .", "edit_actions": [{"type": "R", "before": "indirect experimental evidence also exists demonstrating actions of", "after": "experimental evidence indicates that", "start_char_pos": 194, "end_char_pos": 261}, {"type": "R", "before": "on", "after": "also modifies", "start_char_pos": 292, "end_char_pos": 294}, {"type": "R", "before": "But, so far, this", "after": "This", "start_char_pos": 381, "end_char_pos": 398}, {"type": "A", "before": null, "after": "yet", "start_char_pos": 417, "end_char_pos": 417}, {"type": "R", "before": "provide", "after": "provides", "start_char_pos": 468, "end_char_pos": 475}, {"type": "R", "before": "including", "after": "that takes into account", "start_char_pos": 529, "end_char_pos": 538}, {"type": "A", "before": null, "after": "extrasynaptic", "start_char_pos": 561, "end_char_pos": 561}, {"type": "R", "before": "through", "after": "mediated by the tonic activity of", "start_char_pos": 642, "end_char_pos": 649}, {"type": "R", "before": "directly be coupled to", "after": "be coupled directly to the", "start_char_pos": 661, "end_char_pos": 683}, {"type": "R", "before": "providing", "after": ", resulting in", "start_char_pos": 735, "end_char_pos": 744}, {"type": "R", "before": "exist. This", "after": "coexist. These", "start_char_pos": 1003, "end_char_pos": 1014}, {"type": "R", "before": "beta-amyloids can decrease", "after": "\\beta-amyloids can alter", "start_char_pos": 1161, "end_char_pos": 1187}, {"type": "A", "before": null, "after": "and cerebral ischemia", "start_char_pos": 1258, "end_char_pos": 1258}], "sents_char_pos": [0, 111, 380, 445, 613, 799, 1009, 1119]} {"doc_id": "1807.01768", "revision_depth": "1", "before_revision": "Rationale: Heart muscle contraction is activated by a synchronized systolic Ca release from sarcoplasmic reticulum (SR) via Ca sparks. 
In disease, Ca sparks fail to terminate, causing a diastolic Ca leak that decreases contraction amplitude and increases the risk of arrhythmia. The mechanisms and treatment of the abnormal Ca leak remain unclear. We have recently shown that spark termination emerges as collective behavior (synchronized closings) of Ca release channels (RyRs) that is identical to synchronization of spin orientation in ferromagnets described by a phase transition in Ising model . Objective: We employed the Ising approach to investigate and classify mechanisms of spark termination failure and Ca leak. Methods and Results: The key parametersdetermining whether spark termination succeeds or fails are SR Ca and RyR opening/closing rates. They define analogues of magnetic field h and the inverse temperature (\\beta) in Ising model\\beta*\\beta* \\beta* . Sparks terminate via a phase transition known as \"hpolarity reversal\" and the leak emerges when hfails to change its sign. This happens when the SR depletes insufficiently and RyR openings remained partially synchronized via Ca-induced-Ca-release, generating long-lasting sparks. Leak can also occur via \\beta, known as Onsager's \"order-disorder\" transition. This happens at low SR Ca, reducing RyR current and RyRs interactions, resulting in independent RyR openings. The disorder leak is distinguished from synchronized leak by larger Peierls contour lengths reflecting degree of disorder. Conclusions: Abnormal leak results from either a probability imbalance during synchronized firing of RyRs or from disordered RyR firing. Each leak type requires different and balanced treatment to shift RyR operation towards normal spark termination .", "after_revision": " Heart muscle contraction is normally activated by a synchronized Ca release from sarcoplasmic reticulum (SR) , a major intracellular Ca store. 
However, under abnormal conditions Ca leaks from the SR, decreasing heart contraction amplitude and increasing risk of life-threatening arrhythmia. The mechanisms and regimes of SR operation generating the abnormal Ca leak remain unclear. Here we employed both numerical and analytical modeling to get mechanistic insights into the emergent Ca leak phenomenon. Our numerical simulations using a detailed realistic model of Ca release unit (CRU) reveal sharp transitions resulting in Ca leak. The emergence of leak is closely mapped mathematically to the Ising model from statistical mechanics. The system steady-state behavior is determined by two aggregate parameters: the analogues of magnetic field (h) and the inverse temperature (\\beta) in the Ising model, for which we have explicit formulas in terms of SR Ca and release channel opening/closing rates. The classification of leak regimes takes the shape of a phase \\beta-h diagram, with the regime boundaries occurring at h=0 and a critical value of \\beta (\\beta*) which we estimate using a classical Ising model and mean field theory. Our theory predicts that a synchronized Ca leak will occur when h>0 and \\beta>\\beta* and a disordered leak occurs when \\beta<\\beta* and h is not too negative . The disorder leak is distinguished from synchronized leak (in long-lasting sparks) by larger Peierls contour lengths , an output parameter reflecting degree of disorder. 
Thus, in addition to our detailed numerical model approach we also offer an instantaneous computational tool using analytical formulas of the Ising model for respective RyR parameters and SR Ca load that describe and classify phase transitions and leak emergence .", "edit_actions": [{"type": "D", "before": "Rationale:", "after": null, "start_char_pos": 0, "end_char_pos": 10}, {"type": "A", "before": null, "after": "normally", "start_char_pos": 39, "end_char_pos": 39}, {"type": "D", "before": "systolic", "after": null, "start_char_pos": 68, "end_char_pos": 76}, {"type": "R", "before": "via Ca sparks. In disease, Ca sparks fail to terminate, causing a diastolic Ca leak that decreases", "after": ", a major intracellular Ca store. However, under abnormal conditions Ca leaks from the SR, decreasing heart", "start_char_pos": 121, "end_char_pos": 219}, {"type": "R", "before": "increases the risk of", "after": "increasing risk of life-threatening", "start_char_pos": 246, "end_char_pos": 267}, {"type": "R", "before": "treatment of", "after": "regimes of SR operation generating", "start_char_pos": 299, "end_char_pos": 311}, {"type": "R", "before": "We have recently shown that spark termination emerges as collective behavior (synchronized closings)", "after": "Here we employed both numerical and analytical modeling to get mechanistic insights into the emergent Ca leak phenomenon. Our numerical simulations using a detailed realistic model", "start_char_pos": 349, "end_char_pos": 449}, {"type": "R", "before": "channels (RyRs) that is identical to synchronization of spin orientation in ferromagnets described by a phase transition in Ising model . Objective: We employed the Ising approach to investigate and classify mechanisms of spark termination failure and Ca leak. Methods and Results: The key parametersdetermining whether spark termination succeeds or fails are SR Ca and RyR opening/closing rates. 
They define", "after": "unit (CRU) reveal sharp transitions resulting in Ca leak. The emergence of leak is closely mapped mathematically to the Ising model from statistical mechanics. The system steady-state behavior is determined by two aggregate parameters: the", "start_char_pos": 464, "end_char_pos": 872}, {"type": "R", "before": "h", "after": "(h)", "start_char_pos": 901, "end_char_pos": 902}, {"type": "R", "before": "Ising model", "after": "the Ising model, for which we have explicit formulas in terms of SR Ca and release channel opening/closing rates. The classification of leak regimes takes the shape of a phase \\beta-h diagram, with the regime boundaries occurring at h=0 and a critical value of \\beta (", "start_char_pos": 942, "end_char_pos": 953}, {"type": "A", "before": null, "after": ") which we estimate using a classical Ising model and mean field theory. Our theory predicts that a synchronized Ca leak will occur when h>0 and \\beta>", "start_char_pos": 959, "end_char_pos": 959}, {"type": "A", "before": null, "after": "and a disordered leak occurs when \\beta<", "start_char_pos": 966, "end_char_pos": 966}, {"type": "A", "before": null, "after": "and h is not too negative", "start_char_pos": 973, "end_char_pos": 973}, {"type": "D", "before": "Sparks terminate via a phase transition known as \"hpolarity reversal\" and the leak emerges when hfails to change its sign. This happens when the SR depletes insufficiently and RyR openings remained partially synchronized via Ca-induced-Ca-release, generating long-lasting sparks. Leak can also occur via \\beta, known as Onsager's \"order-disorder\" transition. 
This happens at low SR Ca, reducing RyR current and RyRs interactions, resulting in independent RyR openings.", "after": null, "start_char_pos": 976, "end_char_pos": 1444}, {"type": "A", "before": null, "after": "(in long-lasting sparks)", "start_char_pos": 1503, "end_char_pos": 1503}, {"type": "A", "before": null, "after": ", an output parameter", "start_char_pos": 1538, "end_char_pos": 1538}, {"type": "R", "before": "Conclusions: Abnormal leak results from either a probability imbalance during synchronized firing of RyRs or from disordered RyR firing. Each leak type requires different and balanced treatment to shift RyR operation towards normal spark termination", "after": "Thus, in addition to our detailed numerical model approach we also offer an instantaneous computational tool using analytical formulas of the Ising model for respective RyR parameters and SR Ca load that describe and classify phase transitions and leak emergence", "start_char_pos": 1570, "end_char_pos": 1819}], "sents_char_pos": [0, 135, 279, 348, 724, 860, 1098, 1255, 1334, 1444, 1569, 1706]} {"doc_id": "1807.02502", "revision_depth": "1", "before_revision": "In economics, it is well accepted that adoption of items is governed by the utility that a user derives from their adoption. In this paper, we propose a model called EPIC that combines utility-driven item adoption with the viral network effect helping to propagate adoption of and desire for items from users to their peers. We focus on the case of mutually complementary items and model their adoption behavior via supermodular value functions. We assume price is additive and use zero mean random noise to capture the uncertainty in our knowledge of user valuations. In this setting, we study a novel problem ofsocial welfare maximization : given item budgets, find an optimal allocation of items to seed nodes that maximizes the sum of expected utilities derived by users when the diffusion terminates . 
We show the expected social welfare is monotone but neither submodular nor supermodular . Nevertheless, we show that a simple greedy allocation can ensure a (1-1/e-\\epsilon) -approximation to the optimum . To the best of our knowledge, this is the first instance where for a non-submodular objective in the context of viral marketing, such a high approximation ratio is achieved. We provide the analysis of this result, which is highly nontrivial and along the way we give a solution to the prefix-preserving influence maximization problem, which could be of independent interest. With extensive\\textsf{ experiments on real and synthetic datasets, we show that our algorithm significantly outperforms all the baselines.", "after_revision": "Motivated by applications such as viral marketing, the problem of influence maximization (IM) has been extensively studied in the literature. The goal is to select a small number of users to adopt an item such that it results in a large cascade of adoptions by others. Existing works have three key limitations. (1) They do not account for economic considerations of a user in buying/adopting items. (2) Most studies on multiple items focus on competition, with complementary items receiving limited attention. (3) For the network owner, maximizing social welfare is important to ensure customer loyalty, which is not addressed in prior work in the IM literature. In this paper, we address all three limitations and propose a novel model called UIC that combines utility-driven item adoption with influence propagation over networks. Focusing on the mutually complementary setting, we formulate the problem of social welfare maximization in this novel setting . We show that while the objective function is neither submodular nor supermodular , surprisingly a simple greedy allocation algorithm achieves a factor of (1-1/e-\\epsilon) of the optimum expected social welfare. 
We develop\\textsf{bundleGRD , a scalable version of this approximation algorithm, and demonstrate, with comprehensive experiments on real and synthetic datasets, that it significantly outperforms all baselines.", "edit_actions": [{"type": "R", "before": "In economics, it is well accepted that adoption of items is governed by the utility that a user derives from their adoption.", "after": "Motivated by applications such as viral marketing, the problem of influence maximization (IM) has been extensively studied in the literature. The goal is to select a small number of users to adopt an item such that it results in a large cascade of adoptions by others. Existing works have three key limitations. (1) They do not account for economic considerations of a user in buying/adopting items. (2) Most studies on multiple items focus on competition, with complementary items receiving limited attention. (3) For the network owner, maximizing social welfare is important to ensure customer loyalty, which is not addressed in prior work in the IM literature.", "start_char_pos": 0, "end_char_pos": 124}, {"type": "R", "before": "propose a model called EPIC", "after": "address all three limitations and propose a novel model called UIC", "start_char_pos": 143, "end_char_pos": 170}, {"type": "R", "before": "the viral network effect helping to propagate adoption of and desire for items from users to their peers. We focus on the case of mutually complementary items and model their adoption behavior via supermodular value functions. We assume price is additive and use zero mean random noise to capture the uncertainty in our knowledge of user valuations. In this", "after": "influence propagation over networks. 
Focusing on the mutually complementary", "start_char_pos": 219, "end_char_pos": 576}, {"type": "D", "before": "study a novel problem of", "after": null, "start_char_pos": 589, "end_char_pos": 613}, {"type": "D", "before": "social welfare maximization", "after": null, "start_char_pos": 613, "end_char_pos": 640}, {"type": "R", "before": ": given item budgets, find an optimal allocation of items to seed nodes that maximizes the sum of expected utilities derived by users when the diffusion terminates", "after": "formulate the problem of social welfare maximization in this novel setting", "start_char_pos": 641, "end_char_pos": 804}, {"type": "R", "before": "the expected social welfare is monotone but", "after": "that while the objective function is", "start_char_pos": 815, "end_char_pos": 858}, {"type": "R", "before": ". Nevertheless, we show that", "after": ", surprisingly", "start_char_pos": 895, "end_char_pos": 923}, {"type": "R", "before": "can ensure a", "after": "algorithm achieves a factor of", "start_char_pos": 951, "end_char_pos": 963}, {"type": "R", "before": "-approximation to the optimum . To the best of our knowledge, this is the first instance where for a non-submodular objective in the context of viral marketing, such a high approximation ratio is achieved. We provide the analysis of this result, which is highly nontrivial and along the way we give a solution to the prefix-preserving influence maximization problem, which could be of independent interest. With extensive", "after": "of the optimum expected social welfare. 
We develop", "start_char_pos": 981, "end_char_pos": 1402}, {"type": "A", "before": null, "after": "bundleGRD", "start_char_pos": 1410, "end_char_pos": 1410}, {"type": "A", "before": null, "after": ", a scalable version of this approximation algorithm, and demonstrate, with comprehensive", "start_char_pos": 1411, "end_char_pos": 1411}, {"type": "R", "before": "we show that our algorithm", "after": "that it", "start_char_pos": 1456, "end_char_pos": 1482}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1513, "end_char_pos": 1516}], "sents_char_pos": [0, 124, 324, 445, 568, 806, 896, 1012, 1186, 1387]} {"doc_id": "1807.07590", "revision_depth": "1", "before_revision": " We present a complete wireless neural stimulation system consisting of a 2.2 mm3 wireless, batteryless, leadless implantable stimulator implant (the \"mote\"), an ultrasonic wireless link for power and bi-directional communication, and a hand-held external transceiver. The mote consists of a stimulator integrated circuit (IC) , which performs high-efficiency ultrasonic power harvesting , decodes stimulation parameter downlink data, and generates current-controlled stimulation pulses. Stimulation parameters are time-encoded on the fly through the wireless link rather than being programmed and stored on the mote, enabling complex stimulation protocols with precise timing and closed-loop capability and without the need for on-chip memory and power consumption . Uplink data indicates whether or not the mote is currently stimulating and is encoded by the mote via backscatter modulation and demodulated at the external receiver. We characterize system power, stimulation performance, and acoustic link robustness under misalignment. 
We implant the mote on the sciatic nerve of anesthetized rats and investigate the full range of physiological responses achievable with the wireless system .", "after_revision": "Neural stimulation is a powerful technique for modulating physiological functions and for writing information into the nervous system as part of brain-machine interfaces. Current clinically approved neural stimulators require batteries and are many cubic centimeters in size- typically much larger than their intended targets. We present a complete wireless neural stimulation system consisting of a 2.2 mm^3 wireless, batteryless, leadless implantable stimulator (the \"mote\"), an ultrasonic wireless link for power and bi-directional communication, and a hand-held external transceiver. The mote consists of a piezoceramic transducer, an energy storage capacitor, and a stimulator integrated circuit (IC) . The IC harvests ultrasonic power with high efficiency , decodes stimulation parameter downlink data, and generates current-controlled stimulation pulses. Stimulation parameters are time-encoded on the fly through the wireless link rather than being programmed and stored on the mote, enabling complex stimulation protocols with high-temporal resolution and closed-loop capability while reducing power consumption and on-chip memory requirements . Uplink data indicates whether the mote is currently stimulating ; it is encoded by the mote via backscatter modulation and is demodulated at the external transceiver. We show that the system operates at an acoustic power one fifth the FDA limit for diagnostic ultrasound and is robust to expected real-world acoustic link misalignment. 
We investigate the performance of the system with motes implanted on the sciatic nerve of anesthetized rats and show highly repeatable stimulation across a wide range of physiological responses .", "edit_actions": [{"type": "A", "before": null, "after": "Neural stimulation is a powerful technique for modulating physiological functions and for writing information into the nervous system as part of brain-machine interfaces. Current clinically approved neural stimulators require batteries and are many cubic centimeters in size- typically much larger than their intended targets.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "mm3", "after": "mm^3", "start_char_pos": 78, "end_char_pos": 81}, {"type": "D", "before": "implant", "after": null, "start_char_pos": 137, "end_char_pos": 144}, {"type": "A", "before": null, "after": "piezoceramic transducer, an energy storage capacitor, and a", "start_char_pos": 292, "end_char_pos": 292}, {"type": "R", "before": ", which performs high-efficiency ultrasonic power harvesting", "after": ". The IC harvests ultrasonic power with high efficiency", "start_char_pos": 328, "end_char_pos": 388}, {"type": "R", "before": "precise timing", "after": "high-temporal resolution", "start_char_pos": 663, "end_char_pos": 677}, {"type": "R", "before": "and without the need for", "after": "while reducing power consumption and", "start_char_pos": 705, "end_char_pos": 729}, {"type": "R", "before": "and power consumption", "after": "requirements", "start_char_pos": 745, "end_char_pos": 766}, {"type": "D", "before": "or not", "after": null, "start_char_pos": 799, "end_char_pos": 805}, {"type": "R", "before": "and", "after": "; it", "start_char_pos": 840, "end_char_pos": 843}, {"type": "A", "before": null, "after": "is", "start_char_pos": 898, "end_char_pos": 898}, {"type": "R", "before": "receiver. We characterize system power, stimulation performance, and acoustic link robustness under", "after": "transceiver. 
We show that the system operates at an acoustic power one fifth the FDA limit for diagnostic ultrasound and is robust to expected real-world acoustic link", "start_char_pos": 927, "end_char_pos": 1026}, {"type": "R", "before": "implant the mote", "after": "investigate the performance of the system with motes implanted", "start_char_pos": 1044, "end_char_pos": 1060}, {"type": "R", "before": "investigate the full", "after": "show highly repeatable stimulation across a wide", "start_char_pos": 1107, "end_char_pos": 1127}, {"type": "D", "before": "achievable with the wireless system", "after": null, "start_char_pos": 1161, "end_char_pos": 1196}], "sents_char_pos": [0, 268, 488, 936, 1040]} {"doc_id": "1807.08446", "revision_depth": "1", "before_revision": "We suggest a new optimization technique for minimizing the sum i=1^n f_i(x) of n non-convex real functions that satisfy a property that we call piecewise log-Lipschitz. This is by forging links between techniques in computational geometry, combinatorics and convex optimization. Example applications include the first constant-factor approximation algorithms whose running-time is polynomial in n for the following fundamental problems: (i) Constrained \ell_z Linear Regression: Given z>0, n vectors p_1,\cdots,p_n on the plane, and a vector b\in\mathbb{R^n, compute a unit vector x and a permutation \pi:}%DIFDELCMD < [%%% n%DIFDELCMD < ]\to[n] %%% that minimizes \sum_{i=1^n |p_ix-b_{\pi(i)}|^z. (ii) Points-to-Lines alignment: Given n } lines \ell_1,\cdots,\ell_n on the plane , compute the matching \pi:[n]\to[n] and alignment (rotation matrix R and a translation vector t) that minimize the sum of Euclidean distances %DIFDELCMD < \[ i=1^n dist(Rp_i-t,\pi(i))^z \] %%% ^n \mathrm{dist}(Rp_i-t,\ell_{\pi(i)})^z } between each point to its corresponding line. These problems are open even if z=1 and the matching \pi is given.
In this case , the running time of our algorithms reduces to O(n ) using core-sets that support: streaming, dynamic, and distributed parallel computations (e.g. on the cloud) in poly-logarithmic update time. Generalizations for handling e.g. outliers or pseudo-distances such as M-estimators for these problems are also provided. Experimental results show that our provable algorithms improve existing heuristics also in practice. A demonstration in the context of Augmented Reality show how such algorithms may be used in real-time systems.", "after_revision": "We suggest a new optimization technique for minimizing the sum i=1^n f_i(x) of n non-convex real functions that satisfy a property that we call piecewise log-Lipschitz. This is by forging links between techniques in computational geometry, combinatorics and convex optimization. As an example application, we provide the first constant-factor approximation algorithms whose running-time is polynomial in n for the fundamental problem of Points-to-Lines alignment: Given n points p_1,\cdots,p_n ^n, compute a unit vector x and a permutation \pi:}%DIFDELCMD < [%%% %DIFDELCMD < ]\to[n] %%% ^n |p_ix-b_{\pi(i)}|^z. (ii) Points-to-Lines alignment: Given n } and n lines \ell_1,\cdots,\ell_n on the plane and z>0 , compute the matching \pi:[n]\to[n] and alignment (rotation matrix R and a translation vector t) that minimize the sum of Euclidean distances %DIFDELCMD < \[ i=1^n dist(Rp_i-t,\pi(i))^z \] %%% \sum_{i=1^n \mathrm{dist}(Rp_i-t,\ell_{\pi(i)})^z } between each point to its corresponding line. This problem is non-trivial even if z=1 and the matching \pi is given. If \pi is given , the running time of our algorithms is O(n ^3), and even near-linear in n using core-sets that support: streaming, dynamic, and distributed parallel computations in poly-logarithmic update time. Generalizations for handling e.g. outliers or pseudo-distances such as M-estimators for the problem are also provided.
Experimental results and open source code show that our provable algorithms improve existing heuristics also in practice. A companion demonstration video in the context of Augmented Reality shows how such algorithms may be used in real-time systems.", "edit_actions": [{"type": "R", "before": "Example applications include", "after": "As an example application, we provide", "start_char_pos": 278, "end_char_pos": 306}, {"type": "R", "before": "following fundamental problems: (i) Constrained \\ell_z Linear Regression: Given z>0, n vectors", "after": "fundamental problem of", "start_char_pos": 404, "end_char_pos": 498}, {"type": "A", "before": null, "after": "Points-to-Lines alignment", "start_char_pos": 498, "end_char_pos": 498}, {"type": "A", "before": null, "after": ": Given n points", "start_char_pos": 498, "end_char_pos": 498}, {"type": "D", "before": "on the plane, and a vector b\\in\\mathbb{R", "after": null, "start_char_pos": 514, "end_char_pos": 554}, {"type": "D", "before": "n", "after": null, "start_char_pos": 623, "end_char_pos": 624}, {"type": "D", "before": "that minimizes \\sum_{i=1", "after": null, "start_char_pos": 649, "end_char_pos": 673}, {"type": "A", "before": null, "after": "and n", "start_char_pos": 739, "end_char_pos": 739}, {"type": "A", "before": null, "after": "and z>0", "start_char_pos": 780, "end_char_pos": 780}, {"type": "A", "before": null, "after": "\\sum_{i=1", "start_char_pos": 975, "end_char_pos": 975}, {"type": "R", "before": "These problems are open", "after": "This problem is non-trivial", "start_char_pos": 1064, "end_char_pos": 1087}, {"type": "R", "before": "In this case", "after": "If \\pi is given", "start_char_pos": 1131, "end_char_pos": 1143}, {"type": "R", "before": "reduces to", "after": "is", "start_char_pos": 1181, "end_char_pos": 1191}, {"type": "R", "before": ")", "after": "^3), and even near-linear in n", "start_char_pos": 1196, "end_char_pos": 1197}, {"type": "D", "before": "(e.g. 
on the cloud)", "after": null, "start_char_pos": 1286, "end_char_pos": 1305}, {"type": "R", "before": "these problems", "after": "the problem", "start_char_pos": 1427, "end_char_pos": 1441}, {"type": "A", "before": null, "after": "and open source code", "start_char_pos": 1482, "end_char_pos": 1482}, {"type": "R", "before": "demonstration", "after": "companion demonstration video", "start_char_pos": 1565, "end_char_pos": 1578}, {"type": "R", "before": "show", "after": "shows", "start_char_pos": 1615, "end_char_pos": 1619}], "sents_char_pos": [0, 168, 277, 435, 477, 604, 1063, 1130, 1338, 1460, 1562]} {"doc_id": "1807.11090", "revision_depth": "1", "before_revision": "Reconstituted nicotinic acetylcholine receptors ( nAChR ) exhibit significant gain-of-function upon addition of cholesterol to reconstitution mixtures, and experimentally observed clustering of nAChRs within membranes is cholesterol-sensitive, leading to a common expectation that nAChRs will partition into cholesterol-rich liquid ordered (\"raft\" ) domains . We use coarse-grained molecular dynamics simulations to observe spontaneous interactions of cholesterol, saturated lipids, and polyunsaturated lipids with nAChRs. In binary DPPC:CHOL mixtures, both cholesterol and DPPC acyl chains were observed spontaneously entering deep \"non-annular\" cavities in the nAChR TMD, particularly at the subunit interface and the \\beta subunit center, facilitated by the low amino acid density in the cryo-EM structure of nAChR in a native membrane. Cholesterol was highly enriched in the annulus around the TMD, but this was a short range effect that extended over (at most) 5-10\\AA. In ternary mixtures containing polyunsaturated fatty acids , the presence of a single receptor did not significantly affect the likelihood of domain formation , and nAChR partitioned to the cholesterol-poor liquid-disordered domain whenever one was present. 
This was independent of whether the \l_{do domain lipids had PC or PE headgroups , but saturated lipids with PE headgroups were less depleted from the boundary than those with PC headgroups . Enrichment of unsaturated phospholipids among boundary lipids was positively correlated with their propensity for demixing from cholesterol-rich phases. Long n-3 chains (tested here with Docosahexaenoic Acid ) were highly enriched in annular and non-annular embedded sites, partially displacing cholesterol and completely displacing DPPC, and occupying sites even deeper within the bundle. Shorter n-6 chains (Linoleic acid) were far less effective at displacing cholesterol from non-annular sites.", "after_revision": "Reconstituted nicotinic acetylcholine receptors ( nAChRs ) exhibit significant gain-of-function upon addition of cholesterol to reconstitution mixtures, and cholesterol organization of nAChRs within domain-forming membranes, but whether nAChR partitions to cholesterol-rich liquid-ordered (\"raft\" or l_o) domains or cholesterol-poor liquid-disordered (l_{do . We use coarse-grained molecular dynamics simulations to observe spontaneous interactions of cholesterol, saturated lipids, and polyunsaturated (PUFA) lipids with nAChRs. In binary Dipalmitoylphosphatidylcholine:Cholesterol ( DPPC:CHOL ) mixtures, both CHOL and DPPC acyl chains were observed spontaneously entering deep \"non-annular\" cavities in the nAChR TMD, particularly at the subunit interface and the \beta subunit center, facilitated by the low amino acid density in the cryo-EM structure of nAChR in a native membrane. Cholesterol was highly enriched in the annulus around the TMD, but this effect extended over (at most) 5-10\AA. In domain-forming ternary mixtures containing PUFAs , the presence of a single receptor did not significantly affect the likelihood of domain formation . nAChR partitioned to any cholesterol-poor l_{do of whether the l_{do domain lipids had PC or PE headgroups .
Enrichment of PUFAs among boundary lipids was positively correlated with their propensity for demixing from cholesterol-rich phases. Long n - 3 chains (tested here with Docosahexaenoic Acid , DHA ) were highly enriched in annular and non-annular embedded sites, partially displacing cholesterol and completely displacing DPPC, and occupying sites even deeper within the bundle. Shorter n - 6 chains were far less effective at displacing cholesterol from non-annular sites.", "edit_actions": [{"type": "R", "before": "nAChR", "after": "nAChRs", "start_char_pos": 50, "end_char_pos": 55}, {"type": "R", "before": "experimentally observed clustering", "after": "cholesterol organization", "start_char_pos": 156, "end_char_pos": 190}, {"type": "R", "before": "membranes is cholesterol-sensitive, leading to a common expectation that nAChRs will partition into", "after": "domain-forming membranes, but whether nAChR partitions to", "start_char_pos": 208, "end_char_pos": 307}, {"type": "R", "before": "liquid ordered", "after": "liquid-ordered", "start_char_pos": 325, "end_char_pos": 339}, {"type": "R", "before": ") domains", "after": "or l_o) domains or cholesterol-poor liquid-disordered (l_{do", "start_char_pos": 348, "end_char_pos": 357}, {"type": "A", "before": null, "after": "(PUFA)", "start_char_pos": 503, "end_char_pos": 503}, {"type": "A", "before": null, "after": "Dipalmitoylphosphatidylcholine:Cholesterol (", "start_char_pos": 534, "end_char_pos": 534}, {"type": "A", "before": null, "after": ")", "start_char_pos": 545, "end_char_pos": 545}, {"type": "R", "before": "cholesterol", "after": "CHOL", "start_char_pos": 561, "end_char_pos": 572}, {"type": "R", "before": "was a short range effect that", "after": "effect", "start_char_pos": 915, "end_char_pos": 944}, {"type": "A", "before": null, "after": "domain-forming", "start_char_pos": 981, "end_char_pos": 981}, {"type": "R", "before": "polyunsaturated fatty acids", "after": "PUFAs", "start_char_pos": 1010, "end_char_pos":
1037}, {"type": "R", "before": ", and", "after": ".", "start_char_pos": 1138, "end_char_pos": 1143}, {"type": "R", "before": "the", "after": "any", "start_char_pos": 1165, "end_char_pos": 1168}, {"type": "R", "before": "liquid-disordered domain whenever one was present. This was independent", "after": "l_{do", "start_char_pos": 1186, "end_char_pos": 1257}, {"type": "R", "before": "\l_{do", "after": "l_{do", "start_char_pos": 1273, "end_char_pos": 1279}, {"type": "D", "before": ", but saturated lipids with PE headgroups were less depleted from the boundary than those with PC headgroups", "after": null, "start_char_pos": 1318, "end_char_pos": 1426}, {"type": "R", "before": "unsaturated phospholipids", "after": "PUFAs", "start_char_pos": 1443, "end_char_pos": 1468}, {"type": "R", "before": "n-3", "after": "n - 3", "start_char_pos": 1587, "end_char_pos": 1590}, {"type": "A", "before": null, "after": ", DHA", "start_char_pos": 1637, "end_char_pos": 1637}, {"type": "R", "before": "n-6 chains (Linoleic acid)", "after": "n - 6 chains", "start_char_pos": 1828, "end_char_pos": 1854}], "sents_char_pos": [0, 359, 523, 842, 977, 1236, 1581, 1819]} {"doc_id": "1808.01279", "revision_depth": "1", "before_revision": "Organisms are exposed to neural damage throughout their lives, which can reduce their ability to carry out core behavioral tasks . Their neural circuits need to maintain functionality despite injury, which often requires that key readout signals from the system be preserved . In this work, we explore whether and how certain structural and functional network motifs act as injury mitigation mechanisms by protecting these readouts . Specifically, we examine how (i) Hebbian learning, (ii) high levels of noise, and (iii) presence of parallel inhibitory and excitatory connections contribute to robustness of the olfactory system in the Manduca sexta moth.
We simulate injuries on a detailed computational model of the moth olfactory network in which structures-under-test are parametrically varied. Neuronal impairments are modeled as focal axonal swellings, an axonal pathology observed across all severities of traumatic brain injuries and present in other leading brain disorders. Axonal swellings effectively compromise spike train propagation along the axon and consequently reduce the effective neural firing rate delivered to downstream neurons. All three of the network motifs examined significantly mitigate the effects of injury on readout neurons, either by reducing injury's impact on readout neuron responses or by restoring these responses to pre-injury levels through reinforcement learning . These motifs may thus be partially explained by their value as adaptive mechanisms to minimize the functional effects of neural injury. More generally, these experiments suggest that robustness to injury is a vital design principle to consider when analyzing biological neural systems. Indeed, it is possible that some neural structures and mechanisms, including the ability to learn, are best understood as evolutionary solutions to the challenge of maintaining system function despite injury.", "after_revision": "Organisms suffer neuronal damage throughout their lives, which can impair performance of core behaviors . Their neural circuits need to maintain function despite injury, which in particular requires preserving key system outputs . In this work, we explore whether and how certain structural and functional neuronal network motifs act as injury mitigation mechanisms . Specifically, we examine how (i) Hebbian learning, (ii) high levels of noise, and (iii) parallel inhibitory and excitatory connections contribute to the robustness of the olfactory system in the Manduca sexta moth. We simulate injuries on a detailed computational model of the moth olfactory network calibrated to in vivo data.
The injuries are modeled on focal axonal swellings, a ubiquitous form of axonal pathology observed in traumatic brain injuries and other brain disorders. Axonal swellings effectively compromise spike train propagation along the axon , reducing the effective neural firing rate delivered to downstream neurons. All three of the network motifs examined significantly mitigate the effects of injury on readout neurons, either by reducing injury's impact on readout neuron responses or by restoring these responses to pre-injury levels . These motifs may thus be partially explained by their value as adaptive mechanisms to minimize the functional effects of neural injury. More generally, robustness to injury is a vital design principle to consider when analyzing neural systems. Indeed, it is possible that some neural structures and mechanisms, including the ability to learn, are best understood as evolutionary solutions to the challenge of maintaining system function despite injury.", "edit_actions": [{"type": "R", "before": "are exposed to neural", "after": "suffer neuronal", "start_char_pos": 10, "end_char_pos": 31}, {"type": "R", "before": "reduce their ability to carry out core behavioral tasks", "after": "impair performance of core behaviors", "start_char_pos": 73, "end_char_pos": 128}, {"type": "R", "before": "functionality", "after": "function", "start_char_pos": 170, "end_char_pos": 183}, {"type": "R", "before": "often requires that key readout signals from the system be preserved", "after": "in particular requires preserving key system outputs", "start_char_pos": 206, "end_char_pos": 274}, {"type": "A", "before": null, "after": "neuronal", "start_char_pos": 352, "end_char_pos": 352}, {"type": "D", "before": "by protecting these readouts", "after": null, "start_char_pos": 404, "end_char_pos": 432}, {"type": "D", "before": "presence of", "after": null, "start_char_pos": 523, "end_char_pos": 534}, {"type": "A", "before": null, "after": "the", "start_char_pos": 596, 
"end_char_pos": 596}, {"type": "R", "before": "in which structures-under-test are parametrically varied. Neuronal impairments are modeled as", "after": "calibrated to in vivo data. The injuries are modeled on", "start_char_pos": 744, "end_char_pos": 837}, {"type": "R", "before": "an", "after": "a ubiquitous form of", "start_char_pos": 862, "end_char_pos": 864}, {"type": "R", "before": "across all severities of", "after": "in", "start_char_pos": 891, "end_char_pos": 915}, {"type": "R", "before": "present in other leading", "after": "other", "start_char_pos": 945, "end_char_pos": 969}, {"type": "R", "before": "and consequently reduce", "after": ", reducing", "start_char_pos": 1066, "end_char_pos": 1089}, {"type": "D", "before": "through reinforcement learning", "after": null, "start_char_pos": 1378, "end_char_pos": 1408}, {"type": "D", "before": "these experiments suggest that", "after": null, "start_char_pos": 1563, "end_char_pos": 1593}, {"type": "D", "before": "biological", "after": null, "start_char_pos": 1670, "end_char_pos": 1680}], "sents_char_pos": [0, 130, 276, 658, 801, 986, 1155, 1410, 1546, 1696]} {"doc_id": "1808.10023", "revision_depth": "1", "before_revision": "Biomolecular condensates undergirded by phase separations of proteins and nucleic acids serve crucial biological functions. To gain physical insights into their genetic basis, we study how liquid-liquid phase separation (LLPS) of intrinsically disordered proteins (IDPs) depends on their sequence charge patterns using a coarse-grained model wherein each amino acid residue is represented by a bead on a chain configured in real space. Phase diagrams are determined by Langevin dynamics. Charge patterns are characterized by the `blockiness' measure \\kappa and the `sequence charge decoration' (SCD) parameter. 
Consistent with random phase approximation (RPA) theory and lattice simulations, LLPS propensity ( critical temperature T^*_{\\rm cr} ) increases with increasingly negative SCD for a set of sequences showing a positive correlation between \\kappa and -SCD. Relative to RPA predictions, however, the simulated sequence-dependent variation in T^*_{\\rm cr} is often---though not always---smaller, whereas the simulated critical volume fractions are higher. In contrast , for a set of sequences exhibiting an anti-correlation between \\kappa and -SCD, the simulated T^*_{\\rm cr}'s are---contrary to RPA prediction---quite insensitive to \\kappa or SCD . Additionally, we find that the LLPS propensity of IDPs with a strictly alternating charge sequence might have been overestimated because stabilizing interactions among such sequences require spatial alignments that are more difficult to achieve in real space than stipulated by RPA or lattice models. Taken together, our results underscore both the utility and limitations of RPA as well as the \\kappa and SCD parameters . Further efforts will be needed to rationalize the newly observed subtleties.", "after_revision": "Biomolecular condensates undergirded by phase separations of proteins and nucleic acids serve crucial biological functions. To gain physical insights into their genetic basis, we study how liquid-liquid phase separation (LLPS) of intrinsically disordered proteins (IDPs) depends on their sequence charge patterns using a continuum Langevin chain model wherein each amino acid residue is represented by a single bead. Charge patterns are characterized by the `blockiness' measure \\kappa and the `sequence charge decoration' (SCD) parameter. Consistent with random phase approximation (RPA) theory and lattice simulations, LLPS propensity as characterized by critical temperature T^*_{\\rm cr} increases with increasingly negative SCD for a set of sequences showing a positive correlation between \\kappa and -SCD. 
Relative to RPA , the simulated sequence-dependent variation in T^*_{\\rm cr} is often---though not always---smaller, whereas the simulated critical volume fractions are higher. However , for a set of sequences exhibiting an anti-correlation between \\kappa and -SCD, the simulated T^*_{\\rm cr}'s are quite insensitive to either parameters . Additionally, we find that blocky sequences that allow for strong electrostatic repulsion can lead to coexistence curves with upward concavity as stipulated by RPA, but the LLPS propensity of a strictly alternating charge sequence was likely overestimated by RPA and lattice models because interchain stabilization of this sequence requires spatial alignments that are difficult to achieve in real space . These results help delineate the utility and limitations of the charge pattern parameters and of RPA, pointing to further efforts necessary for rationalizing the newly observed subtleties.", "edit_actions": [{"type": "R", "before": "coarse-grained", "after": "continuum Langevin chain", "start_char_pos": 321, "end_char_pos": 335}, {"type": "R", "before": "bead on a chain configured in real space. 
Phase diagrams are determined by Langevin dynamics.", "after": "single bead.", "start_char_pos": 394, "end_char_pos": 487}, {"type": "R", "before": "(", "after": "as characterized by", "start_char_pos": 708, "end_char_pos": 709}, {"type": "D", "before": ")", "after": null, "start_char_pos": 744, "end_char_pos": 745}, {"type": "R", "before": "predictions, however,", "after": ",", "start_char_pos": 882, "end_char_pos": 903}, {"type": "R", "before": "In contrast", "after": "However", "start_char_pos": 1063, "end_char_pos": 1074}, {"type": "R", "before": "are---contrary to RPA prediction---quite insensitive to \\kappa or SCD", "after": "are quite insensitive to either parameters", "start_char_pos": 1185, "end_char_pos": 1254}, {"type": "A", "before": null, "after": "blocky sequences that allow for strong electrostatic repulsion can lead to coexistence curves with upward concavity as stipulated by RPA, but", "start_char_pos": 1284, "end_char_pos": 1284}, {"type": "D", "before": "IDPs with", "after": null, "start_char_pos": 1308, "end_char_pos": 1317}, {"type": "R", "before": "might have been overestimated because stabilizing interactions among such sequences require", "after": "was likely overestimated by RPA and lattice models because interchain stabilization of this sequence requires", "start_char_pos": 1357, "end_char_pos": 1448}, {"type": "D", "before": "more", "after": null, "start_char_pos": 1477, "end_char_pos": 1481}, {"type": "R", "before": "than stipulated by RPA or lattice models. Taken together, our results underscore both", "after": ". These results help delineate", "start_char_pos": 1517, "end_char_pos": 1602}, {"type": "R", "before": "RPA as well as the \\kappa and SCD parameters . 
Further efforts will be needed to rationalize", "after": "the charge pattern parameters and of RPA, pointing to further efforts necessary for rationalizing", "start_char_pos": 1634, "end_char_pos": 1726}], "sents_char_pos": [0, 123, 435, 487, 610, 865, 1062, 1558, 1680]} {"doc_id": "1809.06156", "revision_depth": "1", "before_revision": "Path analysis is a special class of models in structural equation modeling (SEM) where it describes causal relations among measured variables in a form of linear regression. This paper presents two estimation formulations for confirmatory and exploratory SEM in path analysis problems where a zero pattern of the estimated path coefficient matrix explains a causality structure of the variables. In confirmatory SEM, the original nonlinear equality constraints of model parameters are relaxed to an inequality, allowing us to transform the original problem into a convex problem . A regularized estimation formulation is proposed for exploratory SEM , where the objective function is added with an l1-type penalty of the path coefficient matrix. Under a condition on problem parameters, we show that our optimal solution is low rank and provides an estimate of the path matrix of the original problem. To solve our estimation problems in a convex framework, we apply alternating direction method of multiplier (ADMM) which is shown to be suitable for a large-scale implementation. In combination with applying model selection criteria, the penalty parameter in the regularized estimation, controlling the density of nonzero entries in the path matrix, can be chosen to provide a reasonable trade-off between the model fitting and the complexity of causality structure. The performance of our approach is demonstrated in both simulated and real data sets, and with a comparison of existing methods. 
Real application results include learning causality among climate variables in Thailand where our findings can explain known relations among air pollutants and weather variables. The other experiment is to explore connectivities among brain regions using fMRI time series from ABIDE data sets where our results are interpreted to explain brain network differences in autism patients .", "after_revision": "Path analysis is a model class of structural equation modeling (SEM) , which it describes causal relations among measured variables in the form of a multiple linear regression. This paper presents two estimation formulations , one each for confirmatory and exploratory SEM , where a zero pattern of the estimated path coefficient matrix can explain a causality structure of the variables. The original nonlinear equality constraints of the model parameters were relaxed to an inequality, allowing the transformation of the original problem into a convex framework . A regularized estimation formulation was then proposed for exploratory SEM using an l1-type penalty of the path coefficient matrix. Under a condition on problem parameters, our optimal solution is low rank and provides a useful solution to the original problem. Proximal algorithms were applied to solve our convex programs in a large-scale setting. The performance of this approach was demonstrated in both simulated and real data sets, and in comparison with an existing method.
When applied to two real application results ( learning causality among climate variables in Thailand and examining connectivity differences in autism patients using fMRI time series from ABIDE data sets ) the findings could explain known relationships among environmental variables and discern known and new brain connectivity differences, respectively .", "edit_actions": [{"type": "R", "before": "special class of models in", "after": "model class of", "start_char_pos": 19, "end_char_pos": 45}, {"type": "R", "before": "where", "after": ", which", "start_char_pos": 81, "end_char_pos": 86}, {"type": "R", "before": "a form of", "after": "the form of a multiple", "start_char_pos": 145, "end_char_pos": 154}, {"type": "A", "before": null, "after": ", one each", "start_char_pos": 222, "end_char_pos": 222}, {"type": "R", "before": "in path analysis problems", "after": ",", "start_char_pos": 260, "end_char_pos": 285}, {"type": "R", "before": "explains", "after": "can explain", "start_char_pos": 348, "end_char_pos": 356}, {"type": "R", "before": "In confirmatory SEM, the", "after": "The", "start_char_pos": 397, "end_char_pos": 421}, {"type": "R", "before": "model parameters are", "after": "the model parameters were", "start_char_pos": 465, "end_char_pos": 485}, {"type": "R", "before": "us to transform the", "after": "the transformation of the", "start_char_pos": 521, "end_char_pos": 540}, {"type": "R", "before": "problem", "after": "framework", "start_char_pos": 572, "end_char_pos": 579}, {"type": "R", "before": "is", "after": "was then", "start_char_pos": 619, "end_char_pos": 621}, {"type": "R", "before": ", where the objective function is added with", "after": "using", "start_char_pos": 651, "end_char_pos": 695}, {"type": "D", "before": "we show that", "after": null, "start_char_pos": 788, "end_char_pos": 800}, {"type": "R", "before": "an estimate of the path matrix of the", "after": "a useful solution to the", "start_char_pos": 847, "end_char_pos": 884}, {"type": "R", 
"before": "To solve our estimation problems in a convex framework, we apply alternating direction method of multiplier (ADMM) which is shown to be suitable for a", "after": "Proximal algorithms were applied to solve our convex programs in a", "start_char_pos": 903, "end_char_pos": 1053}, {"type": "R", "before": "implementation. In combination with applying model selection criteria, the penalty parameter in the regularized estimation, controlling the density of nonzero entries in the path matrix, can be chosen to provide a reasonable trade-off between the model fitting and the complexity of causality structure.", "after": "setting.", "start_char_pos": 1066, "end_char_pos": 1369}, {"type": "R", "before": "our approach is", "after": "this approach was", "start_char_pos": 1389, "end_char_pos": 1404}, {"type": "R", "before": "with a comparison of existing methods. Real application results include", "after": "in comparison with an existing method. When applied to two real application results (", "start_char_pos": 1460, "end_char_pos": 1531}, {"type": "R", "before": "where our findings can explain known relations among air pollutants and weather variables. 
The other experiment is to explore connectivities among brain regions", "after": "and examining connectivity differences in autism patients", "start_char_pos": 1587, "end_char_pos": 1747}, {"type": "R", "before": "where our results are interpreted to explain brain network differencesin autism patients", "after": ") the findings could explain known relationships among environmental variables and discern known and new brain connectivity differences, respectively", "start_char_pos": 1792, "end_char_pos": 1880}], "sents_char_pos": [0, 173, 396, 581, 746, 902, 1081, 1369, 1498, 1677]} {"doc_id": "1809.08959", "revision_depth": "1", "before_revision": "Multi-site studies are becoming important to increase statistical power, enhance generalizability, and to improve the likelihood of pooling relevant subgroups together activities which are otherwise limited by the availability of patients or funds at a single site . Even with harmonized imaging sequences, site-dependent variability can mask the advantages of these multi-site studies. The aim of this study was to assess multi-site reproducibility in resting-state functional connectivity fingerprints, and to improve identifiability of obtained functional connectomes. The individual fingerprinting of functional connectivity profiles is promising due to its potential as a robust neuroimaging biomarker with which to draw single-subject inferences . We evaluated individual fingerprints in test-retest visit pairs within and across two sites and present a generalized framework based on principal component analysis to improve identifiability. Those principal components that maximized differential identifiability of a training dataset were used as an orthogonal connectivity basis to reconstruct the individual functional connectomes of training and validation sets. 
The optimally reconstructed functional connectomes showed a substantial improvement in individual fingerprinting of the subjects within and across the two sites and test-retest visit pairs relative to the original data. A notable increase in ICC values for functional edges and resting-state networks were also observedfor reconstructed functional connectomes . Improvements in identifiability were not found to be affected by the presence or absence of global signal regression. Result demonstrate that the data-driven method presented in this study can improve identifiability in resting-state functional connectomes in multi-site studies.", "after_revision": "Multi-site studies are becoming important to increase statistical power, enhance generalizability, and to improve the likelihood of pooling relevant subgroups together activities . Even with harmonized imaging sequences, site-dependent variability can mask the advantages of these multi-site studies. The aim of this study was to assess multi-site reproducibility in resting-state functional connectivity fingerprints, and to improve identifiability of functional connectomes. The individual fingerprinting of functional connectivity profiles is promising due to its potential as a robust neuroimaging biomarker . We evaluated , on two independent multi-site datasets, individual fingerprints in test-retest visit pairs within and across two sites and present a generalized framework based on principal component analysis to improve identifiability. Those components that maximized differential identifiability of a training dataset were used as an orthogonal connectivity basis to reconstruct the functional connectomes of training and validation sets. The optimally reconstructed functional connectomes showed a substantial improvement in individual fingerprinting within and across the two sites relative to the original data. A notable increase in ICC values for functional edges and resting-state networks was also observed . 
Improvements in identifiability were not found to be affected by global signal regression. Post-hoc analyses assessed the effect of the number of fMRI volumes on identifiability and showed that multi-site differential identifiability was for all cases maximized after optimal reconstruction. The generalizability of the optimal set of orthogonal basis of each dataset was evaluated through a leave-one-out procedure. Overall, results demonstrate that the framework presented in this study systematically improves identifiability in resting-state functional connectomes in multi-site studies.", "edit_actions": [{"type": "D", "before": "which are otherwise limited by the availability of patients or funds at a single site", "after": null, "start_char_pos": 179, "end_char_pos": 264}, {"type": "D", "before": "obtained", "after": null, "start_char_pos": 539, "end_char_pos": 547}, {"type": "D", "before": "with which to draw single-subject inferences", "after": null, "start_char_pos": 707, "end_char_pos": 751}, {"type": "A", "before": null, "after": ", on two independent multi-site datasets,", "start_char_pos": 767, "end_char_pos": 767}, {"type": "D", "before": "principal", "after": null, "start_char_pos": 955, "end_char_pos": 964}, {"type": "D", "before": "individual", "after": null, "start_char_pos": 1107, "end_char_pos": 1117}, {"type": "D", "before": "of the subjects", "after": null, "start_char_pos": 1287, "end_char_pos": 1302}, {"type": "D", "before": "and test-retest visit pairs", "after": null, "start_char_pos": 1335, "end_char_pos": 1362}, {"type": "R", "before": "were also observedfor reconstructed functional connectomes", "after": "was also observed", "start_char_pos": 1475, "end_char_pos": 1533}, {"type": "D", "before": "the presence or absence of", "after": null, "start_char_pos": 1601, "end_char_pos": 1627}, {"type": "R", "before": "Result", "after": "Post-hoc analyses assessed the effect of the number of fMRI volumes on identifiability and showed that multi-site 
differential identifiability was for all cases maximized after optimal reconstruction. The generalizability of the optimal set of orthogonal basis of each dataset was evaluated through a leave-one-out procedure. Overall, results", "start_char_pos": 1654, "end_char_pos": 1660}, {"type": "R", "before": "data-driven method", "after": "framework", "start_char_pos": 1682, "end_char_pos": 1700}, {"type": "R", "before": "can improve", "after": "systematically improves", "start_char_pos": 1725, "end_char_pos": 1736}], "sents_char_pos": [0, 266, 386, 571, 753, 948, 1173, 1393, 1535, 1653]} {"doc_id": "1810.01174", "revision_depth": "2", "before_revision": "Statistical analysis of functional images (signals) allows the assessment of neural connectivity patterns at mesoscopic scale. Scalp fields, recorded with Magneto-Electro Encephalography MEEG, strongly correlate to the average effect of synaptic activity, i.e. Primary Current Density (PCD) at the Gray Matter space. Nevertheless, what is observed at the sensor space with either technique is a superposition of projected fields, from the whole Gray-Matter PCD. That is the reason for a major pitfall of MEEG analysis methods, distorted reconstruction of source activity and its connectivity or \"Leakage\". There has been a quest for the modelling to counterattack Leakage of both activation and connectivity. It has been proven that current methods produce incorrect connectomes , also related to incorrect connectivity modelling : they disregard either System Theory and Bayesian Formalism . We introduce a system theoretic formalism of MEEG connectivity alongside its Bayesian inference: Hidden Hermitian Gaussian Graphical Model (H-HGGM). This is a source (state) GGM hidden by the MEEG observation equation , i. e. equivalent to a frequency domain Linear State Space Model (LSSM) . It is unhidden due to the Type II Likelihood approximated representation: Expected Log-Likelihood (ELL). 
The essential contributions here are a theory for HGGM solvers, and its implementation for the subsequent Maximization step of the H-HGGM ELL. Its efficacy is demonstrated in high resolution EEG simulations and a Steady State Visual Evoked potential. Open source packages , to reproduce the results presented in this paper and to analyze external MEEG databases , are freely available online .", "after_revision": "The noninvasive procedures for neural connectivity are under questioning. Theoretical models sustain that the electromagnetic field registered at external sensors is elicited by currents at neural space. Nevertheless, what we observe at the sensor space is a superposition of projected fields, from the whole gray-matter. This is the reason for a major pitfall of noninvasive Electrophysiology methods: distorted reconstruction of neural activity and its connectivity or leakage. It has been proven that current methods produce incorrect connectomes . Somewhat related to the incorrect connectivity modelling , they disregard either Systems Theory and Bayesian Information Theory . We introduce a new formalism that attains for it, Hidden Gaussian Graphical State-Model (HIGGS). A neural Gaussian Graphical Model (GGM) hidden by the observation equation of Magneto-encephalographic (MEEG) signals. HIGGS is equivalent to a frequency domain Linear State Space Model (LSSM) but with sparse connectivity prior. The mathematical contribution here is the theory for high-dimensional and frequency-domain HIGGS solvers. We demonstrate that HIGGS can attenuate the leakage effect in the most critical case: the distortion EEG signal due to head volume conduction heterogeneities. Its application in EEG is illustrated with retrieved connectivity patterns from human Steady State Visual Evoked Potentials (SSVEP). We provide for the first time confirmatory evidence for noninvasive procedures of neural connectivity: concurrent EEG and Electrocorticography (ECoG) recordings on monkey. 
Open source packages are freely available online , to reproduce the results presented in this paper and to analyze external MEEG databases .", "edit_actions": [{"type": "R", "before": "Statistical analysis of functional images (signals) allows the assessment of neural connectivity patterns at mesoscopic scale. Scalp fields, recorded with Magneto-Electro Encephalography MEEG, strongly correlate to the average effect of synaptic activity, i.e. Primary Current Density (PCD) at the Gray Matter", "after": "The noninvasive procedures for neural connectivity are under questioning. Theoretical models sustain that the electromagnetic field registered at external sensors is elicited by currents at neural", "start_char_pos": 0, "end_char_pos": 309}, {"type": "R", "before": "is observed", "after": "we observe", "start_char_pos": 336, "end_char_pos": 347}, {"type": "D", "before": "with either technique", "after": null, "start_char_pos": 368, "end_char_pos": 389}, {"type": "R", "before": "Gray-Matter PCD. That", "after": "gray-matter. This", "start_char_pos": 445, "end_char_pos": 466}, {"type": "R", "before": "MEEG analysis methods,", "after": "noninvasive Electrophysiology methods:", "start_char_pos": 504, "end_char_pos": 526}, {"type": "R", "before": "source", "after": "neural", "start_char_pos": 555, "end_char_pos": 561}, {"type": "R", "before": "\"Leakage\". There has been a quest for the modelling to counterattack Leakage of both activation and connectivity.", "after": "leakage.", "start_char_pos": 595, "end_char_pos": 708}, {"type": "R", "before": ", also related to", "after": ". 
Somewhat related to the", "start_char_pos": 779, "end_char_pos": 796}, {"type": "R", "before": ":", "after": ",", "start_char_pos": 830, "end_char_pos": 831}, {"type": "R", "before": "System", "after": "Systems", "start_char_pos": 854, "end_char_pos": 860}, {"type": "R", "before": "Formalism", "after": "Information Theory", "start_char_pos": 881, "end_char_pos": 890}, {"type": "R", "before": "system theoretic formalism of MEEG connectivity alongside its Bayesian inference: Hidden Hermitian Gaussian Graphical Model (H-HGGM). This is a source (state) GGM", "after": "new formalism that attains for it, Hidden Gaussian Graphical State-Model (HIGGS). A neural Gaussian Graphical Model (GGM)", "start_char_pos": 908, "end_char_pos": 1070}, {"type": "R", "before": "MEEG observation equation , i. e.", "after": "observation equation of Magneto-encephalographic (MEEG) signals. HIGGS is", "start_char_pos": 1085, "end_char_pos": 1118}, {"type": "R", "before": ". It is unhidden due to the Type II Likelihood approximated representation: Expected Log-Likelihood (ELL). The essential contributions here are a theory for HGGM solvers, and its implementation for the subsequent Maximization step of the H-HGGM ELL. Its efficacy is demonstrated in high resolution EEG simulations and a", "after": "but with sparse connectivity prior. The mathematical contribution here is the theory for high-dimensional and frequency-domain HIGGS solvers. We demonstrate that HIGGS can attenuate the leakage effect in the most critical case: the distortion EEG signal due to head volume conduction heterogeneities. Its application in EEG is illustrated with retrieved connectivity patterns from human", "start_char_pos": 1184, "end_char_pos": 1503}, {"type": "R", "before": "potential.", "after": "Potentials (SSVEP). 
We provide for the first time confirmatory evidence for noninvasive procedures of neural connectivity: concurrent EEG and Electrocorticography (ECoG) recordings on monkey.", "start_char_pos": 1531, "end_char_pos": 1541}, {"type": "A", "before": null, "after": "are freely available online", "start_char_pos": 1563, "end_char_pos": 1563}, {"type": "D", "before": ", are freely available online", "after": null, "start_char_pos": 1654, "end_char_pos": 1683}], "sents_char_pos": [0, 126, 316, 461, 605, 708, 892, 1041, 1290, 1433, 1541]} {"doc_id": "1810.03140", "revision_depth": "3", "before_revision": "A typical predictive regression employs a multitude of potential regressors with various degrees of persistence while their signal strength in explaining the dependent variable is often low . Variable selection in such context is of great importance. In this paper, we explore the pitfalls and possibilities of LASSO methods in this predictive regression framework with mixed degrees of persistence . In the presence of stationary, unit root and cointegrated predictors, we show that the adaptive LASSO asymptotically breaks cointegrated groups although it cannot wipe out all inactive cointegrating variables . This new finding motivates a simple but novel post-selection adaptive LASSO, which we call the twin adaptive LASSO (TAlasso), to fix variable selection inconsistency. TAlasso's penalty scheme accommodates the system of heterogeneous regressors, and it recovers the well-known oracle property that implies variable selection consistency and optimal rate of convergence for all three types of regressors. On the contrary , conventional LASSO fails to attain coefficient estimation consistency and variable screening in all components simultaneously , since its penalty is imposed according to the marginal behavior of each individual regressor only. We demonstrate the theoretical properties via extensive Monte Carlo simulations. 
These LASSO-type methods are applied to evaluate short- and long-horizon predictability of S P 500 excess return .", "after_revision": "Explanatory variables in a predictive regression typically exhibit low signal strength and various degrees of persistence . Variable selection in such a context is of great importance. In this paper, we explore the pitfalls and possibilities of the LASSO methods in this predictive regression framework . In the presence of stationary, local unit root, and cointegrated predictors, we show that the adaptive LASSO cannot asymptotically eliminate all cointegrating variables with zero regression coefficients . This new finding motivates a novel post-selection adaptive LASSO, which we call the twin adaptive LASSO (TAlasso), to restore variable selection consistency. Accommodating the system of heterogeneous regressors, TAlasso achieves the well-known oracle property . In contrast , conventional LASSO fails to attain coefficient estimation consistency and variable screening in all components simultaneously . 
We apply these LASSO methods to evaluate the short- and long-horizon predictability of S \\& P 500 excess returns .", "edit_actions": [{"type": "R", "before": "A typical predictive regression employs a multitude of potential regressors with", "after": "Explanatory variables in a predictive regression typically exhibit low signal strength and", "start_char_pos": 0, "end_char_pos": 80}, {"type": "D", "before": "while their signal strength in explaining the dependent variable is often low", "after": null, "start_char_pos": 112, "end_char_pos": 189}, {"type": "A", "before": null, "after": "a", "start_char_pos": 219, "end_char_pos": 219}, {"type": "A", "before": null, "after": "the", "start_char_pos": 312, "end_char_pos": 312}, {"type": "D", "before": "with mixed degrees of persistence", "after": null, "start_char_pos": 367, "end_char_pos": 400}, {"type": "R", "before": "unit root", "after": "local unit root,", "start_char_pos": 434, "end_char_pos": 443}, {"type": "R", "before": "asymptotically breaks cointegrated groups although it cannot wipe out all inactive cointegrating variables", "after": "cannot asymptotically eliminate all cointegrating variables with zero regression coefficients", "start_char_pos": 505, "end_char_pos": 611}, {"type": "D", "before": "simple but", "after": null, "start_char_pos": 643, "end_char_pos": 653}, {"type": "R", "before": "fix variable selection inconsistency. TAlasso's penalty scheme accommodates", "after": "restore variable selection consistency. Accommodating", "start_char_pos": 743, "end_char_pos": 818}, {"type": "R", "before": "and it recovers", "after": "TAlasso achieves", "start_char_pos": 859, "end_char_pos": 874}, {"type": "R", "before": "that implies variable selection consistency and optimal rate of convergence for all three types of regressors. On the contrary", "after": ". 
In contrast", "start_char_pos": 906, "end_char_pos": 1032}, {"type": "R", "before": ", since its penalty is imposed according to the marginal behavior of each individual regressor only. We demonstrate the theoretical properties via extensive Monte Carlo simulations. These LASSO-type methods are applied to evaluate", "after": ". We apply these LASSO methods to evaluate the", "start_char_pos": 1161, "end_char_pos": 1391}, {"type": "A", "before": null, "after": "\\&", "start_char_pos": 1436, "end_char_pos": 1436}, {"type": "R", "before": "return", "after": "returns", "start_char_pos": 1450, "end_char_pos": 1456}], "sents_char_pos": [0, 251, 402, 613, 780, 1016, 1261, 1342]} {"doc_id": "1810.07457", "revision_depth": "1", "before_revision": "Vanna-volga is a popular method for interpolation/extrapolation of volatility smiles. The technique is widely used in the FX markets context, due to its ability to consistently construct the entire lognormal smile using only three lognormal market quotes. However, the derivation of vanna-volga method itself is free of distributional assumptions. To this end , it is surprising there have been actually no attempts to apply the method to Normal volatilities which are the current standard for interest rate markets . We show how the method can be modified to build Normal volatility smiles. As it turns out, that requires only minor modifications compared to the lognormal case. Moreover, as the inversion of Normal volatilities from option prices is easier in the Normal case, the smile construction can occur at a machine-precision level using analytical formulae, making the approximations via Taylor-series unnecessary. Apart from being based on practical and intuitive hedging arguments, the vanna-volga has further important advantages. In comparison to the Normal SABR model, the vanna-volga can easily fit both classical convex and atypical concave smiles ( \"frowns\" ). 
Concave smile patters are sometimes observed around ATM strikes in the interest rate markets, in particular in the situations of anticipated jumps (with unclear outcome) in interest rates. Besides, concavity is often observed towards the lower/left end of the Normal volatility smiles of interest rates. At least in these situations, the vanna-volga can be expected to interpolate/extrapolate better than SABR.", "after_revision": "Vanna-Volga is a popular method for the interpolation/extrapolation of volatility smiles. The technique is widely used in the FX markets context, due to its ability to consistently construct the entire Lognormal smile using only three Lognormal market quotes. However, the derivation of the Vanna-Volga method itself is free of distributional assumptions. With this is mind , it is surprising there have been no attempts to apply the method to Normal volatilities ( the current standard for interest rate markets ) . We show how the method can be modified to build Normal volatility smiles. As it turns out, only minor modifications are required compared to the Lognormal case. Moreover, as the inversion of Normal volatilities from option prices is easier in the Normal case, the smile construction can occur at a machine-precision level using analytical formulae, making the approximations via Taylor-series unnecessary. Apart from being based on practical and intuitive hedging arguments, the Vanna-Volga has further important advantages. In comparison to the Normal SABR model, the Vanna-Volga can easily fit both classical convex and atypical concave smiles ( frowns ). Concave smile patterns are sometimes observed around ATM strikes in the interest rate markets, particularly in the situations of anticipated jumps (with an unclear outcome) in interest rates. Besides, concavity is often observed towards the lower/left end of the Normal volatility smiles of interest rates. 
At least in these situations, the Vanna-Volga can be expected to interpolate/extrapolate better than SABR.", "edit_actions": [{"type": "R", "before": "Vanna-volga", "after": "Vanna-Volga", "start_char_pos": 0, "end_char_pos": 11}, {"type": "A", "before": null, "after": "the", "start_char_pos": 36, "end_char_pos": 36}, {"type": "R", "before": "lognormal", "after": "Lognormal", "start_char_pos": 199, "end_char_pos": 208}, {"type": "R", "before": "lognormal", "after": "Lognormal", "start_char_pos": 232, "end_char_pos": 241}, {"type": "R", "before": "vanna-volga", "after": "the Vanna-Volga", "start_char_pos": 284, "end_char_pos": 295}, {"type": "R", "before": "To this end", "after": "With this is mind", "start_char_pos": 349, "end_char_pos": 360}, {"type": "D", "before": "actually", "after": null, "start_char_pos": 396, "end_char_pos": 404}, {"type": "R", "before": "which are", "after": "(", "start_char_pos": 460, "end_char_pos": 469}, {"type": "A", "before": null, "after": ")", "start_char_pos": 517, "end_char_pos": 517}, {"type": "D", "before": "that requires", "after": null, "start_char_pos": 611, "end_char_pos": 624}, {"type": "A", "before": null, "after": "are required", "start_char_pos": 650, "end_char_pos": 650}, {"type": "R", "before": "lognormal", "after": "Lognormal", "start_char_pos": 667, "end_char_pos": 676}, {"type": "R", "before": "vanna-volga", "after": "Vanna-Volga", "start_char_pos": 1001, "end_char_pos": 1012}, {"type": "R", "before": "vanna-volga", "after": "Vanna-Volga", "start_char_pos": 1091, "end_char_pos": 1102}, {"type": "R", "before": "\"frowns\"", "after": "frowns", "start_char_pos": 1170, "end_char_pos": 1178}, {"type": "R", "before": "patters", "after": "patterns", "start_char_pos": 1196, "end_char_pos": 1203}, {"type": "R", "before": "in particular in", "after": "particularly in", "start_char_pos": 1276, "end_char_pos": 1292}, {"type": "A", "before": null, "after": "an", "start_char_pos": 1335, "end_char_pos": 1335}, {"type": "R", 
"before": "vanna-volga", "after": "Vanna-Volga", "start_char_pos": 1521, "end_char_pos": 1532}], "sents_char_pos": [0, 86, 256, 348, 519, 593, 682, 927, 1046, 1181, 1371, 1486]} {"doc_id": "1810.10142", "revision_depth": "2", "before_revision": "Abnormal synchronisation has been associated with epileptic seizures, one of the most common brain disease. Due to this fact, a better understanding of neuronal synchronisation mechanisms can help in the epilepsytreatment. We study neuronal synchronisation by means of a randomly network with excitatory and inhibitory synapses, where the neuron model is given by the adaptive exponential integrate-and-fire . In our network , we verify that the decrease in the influence of inhibition can generate synchronisation from a pattern of desynchronised spikes. The transition from desynchronous spikes to synchronous bursts activities , induced by varying the synaptic coupling, emerges in a hysteretic loop due to a bistable regime where normal (desynchronous) or abnormal ( synchronous) regimes exist. We show that, for parameters within the bistable region , a square current pulse can trigger the abnormalsynchronous regime , a process we claim to reproduce features of an epileptic seizure. We can also suppress it, as well. Suppression can be achieved through a current with small amplitude applying on less than 10 %DIFDELCMD < \\\\%%% %DIF < of neurons. Therefore, our results demonstrate that electrical stimulation not only can trigger synchronous behaviours, but also can be an effective treatment for epileptic seizures induced in a bistable regime.\\end{abstract} ", "after_revision": "Excessively high, neural synchronisation has been associated with epileptic seizures, one of the most common brain diseases worldwide. A better understanding of neural synchronisation mechanisms can thus help control or even treat epilepsy. 
In this paper, we study neural synchronisation in a random network where nodes are neurons with excitatory and inhibitory synapses, and neural activity for each node is provided by the adaptive exponential integrate-and-fire model. In this framework , we verify that the decrease in the influence of inhibition can generate synchronisation originating from a pattern of desynchronised spikes. The transition from desynchronous spikes to synchronous bursts of activity , induced by varying the synaptic coupling, emerges in a hysteresis loop due to bistability where abnormal (excessively high synchronous) regimes exist. We verify that, for parameters in the bistability regime , a square current pulse can trigger excessively high (abnormal) synchronisation , a process that can reproduce features of epileptic seizures. Then, we show that it is possible to suppress such abnormal synchronisation by applying a small-amplitude external current on less than 10 %DIFDELCMD < \\\\%%% %DIF < of neurons. Therefore, our results demonstrate that electrical stimulation not only can trigger synchronous behaviours, but also can be an effective treatment for epileptic seizures induced in a bistable regime.\\end{abstract} \\% of the neurons in the network. Our results demonstrate that external electrical stimulation not only can trigger synchronous behaviour, but more importantly, it can be used as a means to reduce abnormal synchronisation and thus, control or treat effectively epileptic seizures.", "edit_actions": [{"type": "R", "before": "Abnormal", "after": "Excessively high, neural", "start_char_pos": 0, "end_char_pos": 8}, {"type": "R", "before": "disease. Due to this fact, a", "after": "diseases worldwide. A", "start_char_pos": 99, "end_char_pos": 127}, {"type": "R", "before": "neuronal", "after": "neural", "start_char_pos": 152, "end_char_pos": 160}, {"type": "R", "before": "help in the epilepsytreatment. 
We study neuronal synchronisation by means of a randomly network", "after": "thus help control or even treat epilepsy. In this paper, we study neural synchronisation in a random network where nodes are neurons", "start_char_pos": 192, "end_char_pos": 287}, {"type": "R", "before": "where the neuron model is given", "after": "and neural activity for each node is provided", "start_char_pos": 329, "end_char_pos": 360}, {"type": "R", "before": ". In our network", "after": "model. In this framework", "start_char_pos": 408, "end_char_pos": 424}, {"type": "A", "before": null, "after": "originating", "start_char_pos": 515, "end_char_pos": 515}, {"type": "R", "before": "activities", "after": "of activity", "start_char_pos": 620, "end_char_pos": 630}, {"type": "R", "before": "hysteretic", "after": "hysteresis", "start_char_pos": 688, "end_char_pos": 698}, {"type": "R", "before": "a bistable regime where normal (desynchronous) or abnormal (", "after": "bistability where abnormal (excessively high", "start_char_pos": 711, "end_char_pos": 771}, {"type": "R", "before": "show", "after": "verify", "start_char_pos": 803, "end_char_pos": 807}, {"type": "R", "before": "within the bistable region", "after": "in the bistability regime", "start_char_pos": 829, "end_char_pos": 855}, {"type": "R", "before": "the abnormalsynchronous regime", "after": "excessively high (abnormal) synchronisation", "start_char_pos": 893, "end_char_pos": 923}, {"type": "R", "before": "we claim to", "after": "that can", "start_char_pos": 936, "end_char_pos": 947}, {"type": "R", "before": "an epileptic seizure. We can also suppress it, as well. Suppression can be achieved through a current with small amplitude applying", "after": "epileptic seizures. Then, we show that it is possible to suppress such abnormal synchronisation by applying a small-amplitude external current", "start_char_pos": 970, "end_char_pos": 1101}, {"type": "A", "before": null, "after": "\\% of the neurons in the network. 
Our results demonstrate that external electrical stimulation not only can trigger synchronous behaviour, but more importantly, it can be used as a means to reduce abnormal synchronisation and thus, control or treat effectively epileptic seizures.", "start_char_pos": 1370, "end_char_pos": 1370}], "sents_char_pos": [0, 107, 222, 409, 556, 799, 991, 1025, 1155, 1355]} {"doc_id": "1810.10974", "revision_depth": "1", "before_revision": "This paper tackles the problem of learning brain-visual representations for understanding and neural processes behind human visual perception , with a view towards replicating these processes into machines. The core idea is to learn plausible representations through the combined use of human neural activity evoked by natural imagesas a supervision mechanism for deep learning models. To accomplish this, we propose a multimodal approach that uses two different deep encoders, one for images and one for EEGs, trained in a siamese configuration for learning a joint manifold that maximizes a compatibility measure between visual features and brain representation. The learned manifold is then used to perform image classification and saliency detection as well as to shed light on the possible representations generated by the human brain when perceiving the visual world. Performance analysis shows that neural signals can be used to effectively supervise the training of deep learning models, as demonstrated by the achieved performance in both image classification and saliency detection . Furthermore, the learned brain-visual manifold is consistent with cognitive neuroscience literature about visual perception and , most importantly, highlights new associations between brain areas, image patches and computational kernels. 
In particular, we are able to approximate brain responses to visual stimuli by training an artificial model with image features correlated to neural activity .", "after_revision": "This work presents a novel method of exploring human brain-visual representations , with a view towards replicating these processes in machines. The core idea is to learn plausible computational and biological representations by correlating human neural activity and natural images. Thus, we first propose a model, EEG-ChannelNet, to learn a brain manifold for EEG classification. After verifying that visual information can be extracted from EEG data, we introduce a multimodal approach that uses deep image and EEG encoders, trained in a siamese configuration , for learning a joint manifold that maximizes a compatibility measure between visual features and brain representations. We then carry out image classification and saliency detection on the learned manifold. Performance analyses show that our approach satisfactorily decodes visual information from neural signals. This, in turn, can be used to effectively supervise the training of deep learning models, as demonstrated by the high performance of image classification and saliency detection on out-of-training classes. 
The obtained results show that the learned brain-visual features lead to improved performance and simultaneously bring deep models more in line with cognitive neuroscience work related to visual perception and attention .", "edit_actions": [{"type": "R", "before": "paper tackles the problem of learning", "after": "work presents a novel method of exploring human", "start_char_pos": 5, "end_char_pos": 42}, {"type": "D", "before": "for understanding and neural processes behind human visual perception", "after": null, "start_char_pos": 72, "end_char_pos": 141}, {"type": "R", "before": "into", "after": "in", "start_char_pos": 192, "end_char_pos": 196}, {"type": "R", "before": "representations through the combined use of", "after": "computational and biological representations by correlating", "start_char_pos": 243, "end_char_pos": 286}, {"type": "R", "before": "evoked by natural imagesas a supervision mechanism for deep learning models. To accomplish this, we propose", "after": "and natural images. Thus, we first propose a model, EEG-ChannelNet, to learn a brain manifold for EEG classification. After verifying that visual information can be extracted from EEG data, we introduce", "start_char_pos": 309, "end_char_pos": 416}, {"type": "R", "before": "two different deep encoders, one for images and one for EEGs,", "after": "deep image and EEG encoders,", "start_char_pos": 449, "end_char_pos": 510}, {"type": "A", "before": null, "after": ",", "start_char_pos": 546, "end_char_pos": 546}, {"type": "R", "before": "representation. The learned manifold is then used to perform", "after": "representations. We then carry out", "start_char_pos": 650, "end_char_pos": 710}, {"type": "R", "before": "as well as to shed light on the possible representations generated by the human brain when perceiving the visual world. Performance analysis shows that neural signals", "after": "on the learned manifold. 
Performance analyses show that our approach satisfactorily decodes visual information from neural signals. This, in turn,", "start_char_pos": 755, "end_char_pos": 921}, {"type": "R", "before": "achieved performance in both", "after": "high performance of", "start_char_pos": 1020, "end_char_pos": 1048}, {"type": "R", "before": ". Furthermore,", "after": "on out-of-training classes. The obtained results show that", "start_char_pos": 1093, "end_char_pos": 1107}, {"type": "R", "before": "manifold is consistent", "after": "features lead to improved performance and simultaneously bring deep models more in line", "start_char_pos": 1133, "end_char_pos": 1155}, {"type": "R", "before": "literature about", "after": "work related to", "start_char_pos": 1184, "end_char_pos": 1200}, {"type": "R", "before": ", most importantly, highlights new associations between brain areas, image patches and computational kernels. In particular, we are able to approximate brain responses to visual stimuli by training an artificial model with image features correlated to neural activity", "after": "attention", "start_char_pos": 1223, "end_char_pos": 1490}], "sents_char_pos": [0, 206, 385, 665, 874, 1094, 1332]} {"doc_id": "1811.01051", "revision_depth": "1", "before_revision": "Melanoma is a type of skin cancer with the most rapidly increasing incidence. Early detection of melanoma using dermoscopy images significantly increases patients' survival rate. However, accurately classifying skin lesions , especially in the early stage , is extremely challenging via dermatologists' observation . Hence, the discovery of reliable biomarkers for melanoma diagnosis will be meaningful . Recent years, deep learning empowered computer-assisted diagnosis has been shown its value in medical imaging-based decision making. However, lots of research focus on improving disease detection accuracy but not exploring the evidence of pathology. 
In this paper, we propose a method to interpret the deep learning classification findings. Firstly, we propose an accurate neural network architecture to classify skin lesion . Secondly, we utilize a prediction difference analysis method that examining each patch on the image through patch wised corrupting for detecting the biomarkers. Lastly, we validate that our biomarker findings are corresponding to the patterns in the literature. The findings might be significant to guide clinical diagnosis.", "after_revision": "Melanoma is a type of skin cancer with the most rapidly increasing incidence. Early detection of melanoma using dermoscopy images significantly increases patients' survival rate. However, accurately classifying skin lesions by eye , especially in the early stage of melanoma , is extremely challenging for the dermatologists . Hence, the discovery of reliable biomarkers will be meaningful for melanoma diagnosis . Recent years, the value of deep learning empowered computer-assisted diagnose has been shown in biomedical imaging based decision making. However, much research focuses on improving disease detection accuracy but not exploring the evidence of pathology. In this paper, we propose a method to interpret the deep learning classification findings. Firstly, we propose an accurate neural network architecture to classify skin lesions . Secondly, we utilize a prediction difference analysis method that examines each patch on the image through patch-wised corrupting to detect the biomarkers. Lastly, we validate that our biomarker findings are corresponding to the patterns in the literature. 
The findings can be significant and useful to guide clinical diagnosis.", "edit_actions": [{"type": "A", "before": null, "after": "by eye", "start_char_pos": 224, "end_char_pos": 224}, {"type": "A", "before": null, "after": "of melanoma", "start_char_pos": 257, "end_char_pos": 257}, {"type": "R", "before": "via dermatologists' observation", "after": "for the dermatologists", "start_char_pos": 285, "end_char_pos": 316}, {"type": "D", "before": "for melanoma diagnosis", "after": null, "start_char_pos": 363, "end_char_pos": 385}, {"type": "A", "before": null, "after": "for melanoma diagnosis", "start_char_pos": 405, "end_char_pos": 405}, {"type": "A", "before": null, "after": "the value of", "start_char_pos": 422, "end_char_pos": 422}, {"type": "R", "before": "diagnosis", "after": "diagnose", "start_char_pos": 465, "end_char_pos": 474}, {"type": "R", "before": "its value in medical imaging-based", "after": "in biomedical imaging based", "start_char_pos": 490, "end_char_pos": 524}, {"type": "R", "before": "lots of research focus", "after": "much research focuses", "start_char_pos": 551, "end_char_pos": 573}, {"type": "R", "before": "lesion", "after": "lesions", "start_char_pos": 827, "end_char_pos": 833}, {"type": "R", "before": "examining", "after": "examines", "start_char_pos": 902, "end_char_pos": 911}, {"type": "R", "before": "patch wised corrupting for detecting", "after": "patch-wised corrupting to detect", "start_char_pos": 944, "end_char_pos": 980}, {"type": "R", "before": "might be significant", "after": "can be significant and useful", "start_char_pos": 1111, "end_char_pos": 1131}], "sents_char_pos": [0, 77, 178, 318, 407, 541, 658, 749, 835, 996, 1097]} {"doc_id": "1811.05849", "revision_depth": "2", "before_revision": "Attempting to recognize a tree inside a phylogenetic network is a fundamental undertaking in evolutionary analysis. 
Therefore, the concept of \" tree-based \" phylogenetic networks, which was introduced by Francis and Steel, has attracted much attention of theoretical biologists in the last few years. In this context, spanning trees of a certain kind called \"subdivision trees\" play an essential role and there are many important computational problems about them, whose time complexity is still unclear. Against this backdrop, the present paperaims to provide a graph theoretical framework for solving different problems on subdivision trees in a simple and unified manner. To this end, we focus on a structure called the maximal zig-zag trail decomposition that is inherent in any rooted binary phylogenetic network N and prove a structure theorem that characterizes the collection of all subdivision trees of N . Our theorem does not only imply and unify various results in the literature but also yield linear time (for enumeration, linear delay) algorithms for the following problems: given a rooted binary phylogenetic network N, 1) determine whether or not N has a subdivision tree and find one if there exists any (decision/search problem) ; 2) compute the number of subdivision trees of N (counting problem); 3 ) list all subdivision trees of N (enumeration problem); and 4 ) find a subdivision tree to maximize or minimize a prescribed objective function (optimization problem). Importantly, the results and algorithms in this paper still hold true for some non-binary phylogenetic networks and this generalization gives a partial answer to an open question from Pons, Semple, and Steel. We also mention some statistical applications and further research directions .", "after_revision": "Attempting to recognize a tree inside a phylogenetic network is a fundamental undertaking in evolutionary analysis. In the last few years, therefore, tree-based phylogenetic networks, which are defined by a spanning tree called a subdivision tree, have attracted attention of theoretical biologists . 
However, the application of such networks is still not easy, due to many problems whose time complexities are not clearly understood. In this paper, we provide a general framework for solving those various old or new problems from a coherent perspective, rather than analyzing the complexity of each individual problem or developing an algorithm one by one. More precisely, we establish a structure theorem that gives a way to canonically decompose any rooted binary phylogenetic network N into maximal zig-zag trails that are uniquely determined, and use it to characterize the set of subdivision trees of N in the form of a direct product, in a way reminiscent of the structure theorem for finitely generated Abelian groups. From the main results, we derive a series of linear time and linear time delay algorithms for the following problems: given a rooted binary phylogenetic network N, 1) determine whether or not N has a subdivision tree and find one if there exists any ; 2) measure the deviation of N from being tree-based; 3) compute the number of subdivision trees of N ; 4 ) list all subdivision trees of N ; and 5 ) find a subdivision tree to maximize or minimize a prescribed objective function . All algorithms proposed here are optimal in terms of time complexity. Our results do not only imply and unify various known results, but also answer many open questions and moreover enable novel applications, such as the estimation of a maximum likelihood tree underlying a tree-based network. 
The results and algorithms in this paper still hold true for a special class of rooted non-binary phylogenetic networks .", "edit_actions": [{"type": "R", "before": "Therefore, the concept of \"", "after": "In the last few years, therefore,", "start_char_pos": 116, "end_char_pos": 143}, {"type": "D", "before": "\"", "after": null, "start_char_pos": 155, "end_char_pos": 156}, {"type": "R", "before": "was introduced by Francis and Steel, has attracted much", "after": "are defined by a spanning tree called a subdivision tree, have attracted", "start_char_pos": 186, "end_char_pos": 241}, {"type": "R", "before": "in the last few years. In this context, spanning trees of a certain kind called \"subdivision trees\" play an essential role and there are many important computational problems about them, whose time complexity is still unclear. Against this backdrop, the present paperaims to provide a graph theoretical", "after": ". However, the application of such networks is still not easy, due to many problems whose time complexities are not clearly understood. In this paper, we provide a general", "start_char_pos": 278, "end_char_pos": 580}, {"type": "R", "before": "different problems on subdivision trees in a simple and unified manner. To this end, we focus on a structure called the maximal zig-zag trail decomposition that is inherent in", "after": "those various old or new problems from a coherent perspective, rather than analyzing the complexity of each individual problem or developing an algorithm one by one. More precisely, we establish a structure theorem that gives a way to canonically decompose", "start_char_pos": 603, "end_char_pos": 778}, {"type": "R", "before": "and prove a structure theorem that characterizes the collection of all", "after": "into maximal zig-zag trails that are uniquely determined, and use it to characterize the set of", "start_char_pos": 820, "end_char_pos": 890}, {"type": "R", "before": ". 
Our theorem does not only imply and unify various results in the literature but also yield linear time (for enumeration, linear delay)", "after": "in the form of a direct product, in a way reminiscent of the structure theorem for finitely generated Abelian groups. From the main results, we derive a series of linear time and linear time delay", "start_char_pos": 914, "end_char_pos": 1050}, {"type": "D", "before": "(decision/search problem)", "after": null, "start_char_pos": 1222, "end_char_pos": 1247}, {"type": "A", "before": null, "after": "measure the deviation of N from being tree-based; 3)", "start_char_pos": 1253, "end_char_pos": 1253}, {"type": "R", "before": "(counting problem); 3", "after": "; 4", "start_char_pos": 1299, "end_char_pos": 1320}, {"type": "R", "before": "(enumeration problem); and 4", "after": "; and 5", "start_char_pos": 1355, "end_char_pos": 1383}, {"type": "R", "before": "(optimization problem). Importantly, the", "after": ". All algorithms proposed here are optimal in terms of time complexity. Our results do not only imply and unify various known results, but also answer many open questions and moreover enable novel applications, such as the estimation of a maximum likelihood tree underlying a tree-based network. The", "start_char_pos": 1466, "end_char_pos": 1506}, {"type": "R", "before": "some", "after": "a special class of rooted", "start_char_pos": 1564, "end_char_pos": 1568}, {"type": "D", "before": "and this generalization gives a partial answer to an open question from Pons, Semple, and Steel. We also mention some statistical applications and further research directions", "after": null, "start_char_pos": 1602, "end_char_pos": 1776}], "sents_char_pos": [0, 115, 300, 504, 674, 1089, 1249, 1318, 1377, 1489, 1698]} {"doc_id": "1811.09881", "revision_depth": "2", "before_revision": "Unit disk graphs are the intersection graphs of unit diameter disks in the Euclidean plane. 
Recognizing unit disk graph is an important geometric problem, and has many application areas. In general, this problem is shown to be \\existsR-complete. In some applications, the objects that correspond to unit disks, have predefined (geometrical) structures to be placed on. Hence, many scientists attacked this problem by restricting the domain for the centers of the disks . One example to such applications is wireless sensor networks, where each disk corresponds to a wireless sensor node, and a pair of intersecting disks correspond to a pair of sensors being able to communicate with each other . It is usually assumed that the nodes have identical sensing ranges, and thus unit disk graph model is used to model problems concerning wireless sensor networks. In this paper, we also attack the unit disk recognition problem on a restricted domain, by assuming a scenario where the wireless sensor nodes are deployed on the corridors of a building. Based on this scenario, we impose a geometric constraint such that the unit disks must be centered onto given straight lines. We show that deciding whether there exists a realization of a given graph as unit disk graphs on straight lines is NP-hard, even if the given lines are parallel to either x-axis or y-axis. Moreover, we remark that if the straight lines are not given, then the problem becomes \\exists\\mathbb{R", "after_revision": "Unit disk graphs are the intersection graphs of unit radius disks in the Euclidean plane. Deciding whether there exists an embedding of a given unit disk graph , i.e. unit disk graph recognition, is an important geometric problem, and has many application areas. In general, this problem is known to be \\existsR-complete. In some applications, the objects that correspond to unit disks, have predefined (geometrical) structures to be placed on. Hence, many researchers attacked this problem by restricting the domain of the disk centers . 
One example to such applications is wireless sensor networks, where each disk corresponds to a wireless sensor node, and a pair of intersecting disks corresponds to a pair of sensors being able to communicate with one another . It is usually assumed that the nodes have identical sensing ranges, and thus a unit disk graph model is used to model problems concerning wireless sensor networks. We consider the unit disk graph realization problem on a restricted domain, by assuming a scenario where the wireless sensor nodes are deployed on the corridors of a building. Based on this scenario, we impose a geometric constraint such that the unit disks must be centered onto given straight lines. In this paper, we first describe a polynomial-time reduction which shows that deciding whether a graph can be realized as unit disks onto given straight lines is NP-hard, when the given lines are parallel to either x-axis or y-axis. Using the reduction we described, we also show that this problem is NP-complete when the given lines are only parallel to x-axis (and one another). We obtain those results using the idea of the logic engine introduced by Bhatt and Cosmadakis in 1987.", "edit_actions": [{"type": "R", "before": "diameter", "after": "radius", "start_char_pos": 53, "end_char_pos": 61}, {"type": "R", "before": "Recognizing", "after": "Deciding whether there exists an embedding of a given", "start_char_pos": 92, "end_char_pos": 103}, {"type": "A", "before": null, "after": ", i.e. 
unit disk graph recognition,", "start_char_pos": 120, "end_char_pos": 120}, {"type": "R", "before": "shown", "after": "known", "start_char_pos": 216, "end_char_pos": 221}, {"type": "R", "before": "scientists", "after": "researchers", "start_char_pos": 382, "end_char_pos": 392}, {"type": "R", "before": "for the centers of the disks", "after": "of the disk centers", "start_char_pos": 441, "end_char_pos": 469}, {"type": "R", "before": "correspond", "after": "corresponds", "start_char_pos": 622, "end_char_pos": 632}, {"type": "R", "before": "each other", "after": "one another", "start_char_pos": 685, "end_char_pos": 695}, {"type": "A", "before": null, "after": "a", "start_char_pos": 775, "end_char_pos": 775}, {"type": "R", "before": "In this paper, we also attack", "after": "We consider", "start_char_pos": 861, "end_char_pos": 890}, {"type": "R", "before": "recognition", "after": "graph realization", "start_char_pos": 905, "end_char_pos": 916}, {"type": "R", "before": "We show", "after": "In this paper, we first describe a polynomial-time reduction which shows", "start_char_pos": 1175, "end_char_pos": 1182}, {"type": "R", "before": "there exists a realization of a given graph as unit disk graphs on", "after": "a graph can be realized as unit disks onto given", "start_char_pos": 1205, "end_char_pos": 1271}, {"type": "R", "before": "even if", "after": "when", "start_char_pos": 1299, "end_char_pos": 1306}, {"type": "R", "before": "Moreover, we remark that if the straight lines are not given, then the problem becomes \\exists\\mathbb{R", "after": "Using the reduction we described, we also show that this problem is NP-complete when the given lines are only parallel to x-axis (and one another). 
We obtain those results using the idea of the logic engine introduced by Bhatt and Cosmadakis in 1987.", "start_char_pos": 1364, "end_char_pos": 1467}], "sents_char_pos": [0, 91, 187, 246, 369, 471, 697, 860, 1048, 1174, 1363]} {"doc_id": "1811.12355", "revision_depth": "2", "before_revision": "New algorithms for exploiting amino acid covariation have had a large impact on de novo protein modelling. The limited applicability of these methods to smaller protein families has limited their use for structural annotation of whole genomes , however. Most recently , deep learning techniques have shown promise in allowing residue-residue contacts to be predicted accurately even for shallow sequence alignments. Here we introduce DMPfold, a development of the DeepMetaPSICOV contact predictor, which uses deep learning throughout to predict inter-atomic distance bounds, the main chain hydrogen bond network, and main chain torsion anglesand uses these to build models in an iterative procedure . DMPfold produces more accurate models than two popular methods for a test set of CASP12 free modelling domains, and is able to generate high quality models without any modifications for a set of transmembrane proteins. We apply it to all Pfam domains without a known structure and provide high confidence models for 25\\% of these so-called \"dark \" families , a calculation that took less than a week on a small cluster with 200 available cores . DMPfold provides models for 16\\% of human proteome UniProt entries without structures, can generate accurate models with alignments of fewer than 100 sequences in some cases, and is freely available.", "after_revision": "The inapplicability of amino acid covariation methods to small protein families has limited their use for structural annotation of whole genomes . Recently , deep learning has shown promise in allowing accurate residue-residue contact prediction even for shallow sequence alignments. 
Here we introduce DMPfold, which uses deep learning to predict inter-atomic distance bounds, the main chain hydrogen bond network, and torsion angles, which it uses to build models in an iterative fashion . DMPfold produces more accurate models than two popular methods for a test set of CASP12 domains, and works just as well for transmembrane proteins. Applied to all Pfam domains without known structures, confident models for 25\\% of these so-called dark families were produced in under a week on a small 200 core cluster . DMPfold provides models for 16\\% of human proteome UniProt entries without structures, generates accurate models with fewer than 100 sequences in some cases, and is freely available.", "edit_actions": [{"type": "R", "before": "New algorithms for exploiting", "after": "The inapplicability of", "start_char_pos": 0, "end_char_pos": 29}, {"type": "R", "before": "have had a large impact on de novo protein modelling. The limited applicability of these methods to smaller", "after": "methods to small", "start_char_pos": 53, "end_char_pos": 160}, {"type": "R", "before": ", however. Most recently", "after": ". 
Recently", "start_char_pos": 243, "end_char_pos": 267}, {"type": "R", "before": "techniques have", "after": "has", "start_char_pos": 284, "end_char_pos": 299}, {"type": "A", "before": null, "after": "accurate", "start_char_pos": 326, "end_char_pos": 326}, {"type": "R", "before": "contacts to be predicted accurately", "after": "contact prediction", "start_char_pos": 343, "end_char_pos": 378}, {"type": "D", "before": "a development of the DeepMetaPSICOV contact predictor,", "after": null, "start_char_pos": 444, "end_char_pos": 498}, {"type": "D", "before": "throughout", "after": null, "start_char_pos": 524, "end_char_pos": 534}, {"type": "R", "before": "main chain torsion anglesand uses these", "after": "torsion angles, which it uses", "start_char_pos": 618, "end_char_pos": 657}, {"type": "R", "before": "procedure", "after": "fashion", "start_char_pos": 690, "end_char_pos": 699}, {"type": "D", "before": "free modelling", "after": null, "start_char_pos": 790, "end_char_pos": 804}, {"type": "R", "before": "is able to generate high quality models without any modifications for a set of", "after": "works just as well for", "start_char_pos": 818, "end_char_pos": 896}, {"type": "R", "before": "We apply it", "after": "Applied", "start_char_pos": 921, "end_char_pos": 932}, {"type": "R", "before": "a known structure and provide high confidence", "after": "known structures, confident", "start_char_pos": 961, "end_char_pos": 1006}, {"type": "R", "before": "\"dark \" families , a calculation that took less than a", "after": "dark families were produced in under a", "start_char_pos": 1042, "end_char_pos": 1096}, {"type": "D", "before": "cluster with", "after": null, "start_char_pos": 1113, "end_char_pos": 1125}, {"type": "R", "before": "available cores", "after": "core cluster", "start_char_pos": 1130, "end_char_pos": 1145}, {"type": "R", "before": "can generate", "after": "generates", "start_char_pos": 1235, "end_char_pos": 1247}, {"type": "D", "before": "alignments of", "after": 
null, "start_char_pos": 1269, "end_char_pos": 1282}], "sents_char_pos": [0, 106, 253, 416, 443, 920]} {"doc_id": "1812.06537", "revision_depth": "1", "before_revision": "This paper explores the use of fuzzy regression-discontinuity design in the context where multiple treatments are applied at the threshold. The identification result shows that, under a very strong assumption of equality of treatment probability changes at the cutoff point, a difference in fuzzy discontinuity identify a treatment effect of interest. Using the data from the National Health Interview Survey (NHIS) , we apply this identification strategy to evaluate the causal effect of the Affordable Care Act (ACA) on health care access and utilization of old Americans. We find results suggesting that the implementation of the Affordable Care Act has led to an increase in the hospitalization rate of elderly American--5\\% more hospitalization. It has caused a minor increase of cost-related direct barrier to access to care--3.6\\% increase in the probability of delaying care for cost reasons . The ACA has also exacerbated cost-related barriers to follow-up and continuity care -- 7\\% more elderly couldn't afford prescriptions, 7\\% more couldn't see a specialist and, 5.5\\% more couldn't afford a follow-up visit -- as result of ACA", "after_revision": "This paper explores the use of a fuzzy regression discontinuity design where multiple treatments are applied at the threshold. The identification results show that, under the very strong assumption that the change in the probability of treatment at the cutoff is equal across treatments, a difference-in-discontinuities estimator identifies the treatment effect of interest. The point estimates of the treatment effect using a simple fuzzy difference-in-discontinuities design are biased if the change in the probability of a treatment applying at the cutoff differs across treatments. 
Modifications of the fuzzy difference-in-discontinuities approach that rely on milder assumptions are also proposed. Our results suggest caution is needed when applying before-and-after methods in the presence of fuzzy discontinuities. Using data from the National Health Interview Survey , we apply this new identification strategy to evaluate the causal effect of the Affordable Care Act (ACA) on older Americans' health care access and utilization . Our results suggest that the ACA has (1) led to a 5\\% increase in the hospitalization rate of elderly Americans, (2) increased the probability of delaying care for cost reasons by 3.6\\%, and (3) exacerbated cost-related barriers to follow-up care and continuity of care: 7\\% more elderly individuals could not afford prescriptions, 7\\% more could not see a specialist , and 5.5\\% more could not afford a follow-up visit . Our results can be explained by an increase in the demand for health services without a corresponding adjustment in supply following the implementation of the ACA.", "edit_actions": [{"type": "R", "before": "fuzzy regression-discontinuity design in the context", "after": "a fuzzy regression discontinuity design", "start_char_pos": 31, "end_char_pos": 83}, {"type": "R", "before": "result shows", "after": "results show", "start_char_pos": 159, "end_char_pos": 171}, {"type": "R", "before": "a", "after": "the", "start_char_pos": 184, "end_char_pos": 185}, {"type": "R", "before": "of equality of treatment probability changes", "after": "that the change in the probability of treatment", "start_char_pos": 209, "end_char_pos": 253}, {"type": "R", "before": "point, a difference in fuzzy discontinuity identify a", "after": "is equal across treatments, a difference-in-discontinuities estimator identifies the", "start_char_pos": 268, "end_char_pos": 321}, {"type": "R", "before": "Using the", "after": "The point estimates of the treatment effect using a simple fuzzy difference-in-discontinuities design are biased 
if the change in the probability of a treatment applying at the cutoff differs across treatments. Modifications of the fuzzy difference-in-discontinuities approach that rely on milder assumptions are also proposed. Our results suggest caution is needed when applying before-and-after methods in the presence of fuzzy discontinuities. Using", "start_char_pos": 352, "end_char_pos": 361}, {"type": "D", "before": "(NHIS)", "after": null, "start_char_pos": 409, "end_char_pos": 415}, {"type": "A", "before": null, "after": "new", "start_char_pos": 432, "end_char_pos": 432}, {"type": "A", "before": null, "after": "older Americans'", "start_char_pos": 523, "end_char_pos": 523}, {"type": "R", "before": "of old Americans. We find results suggesting that the implementation of the Affordable Care Act has led to an", "after": ". Our results suggest that the ACA has (1) led to a 5\\%", "start_char_pos": 559, "end_char_pos": 668}, {"type": "R", "before": "American--5\\% more hospitalization. It has caused a minor increase of cost-related direct barrier to access to care--3.6\\% increase in", "after": "Americans, (2) increased", "start_char_pos": 717, "end_char_pos": 851}, {"type": "R", "before": ". The ACA has also", "after": "by 3.6\\%, and (3)", "start_char_pos": 902, "end_char_pos": 920}, {"type": "R", "before": "and continuity care --", "after": "care and continuity of care:", "start_char_pos": 968, "end_char_pos": 990}, {"type": "R", "before": "couldn't", "after": "individuals could not", "start_char_pos": 1008, "end_char_pos": 1016}, {"type": "R", "before": "couldn't", "after": "could not", "start_char_pos": 1048, "end_char_pos": 1056}, {"type": "R", "before": "and,", "after": ", and", "start_char_pos": 1074, "end_char_pos": 1078}, {"type": "R", "before": "couldn't", "after": "could not", "start_char_pos": 1090, "end_char_pos": 1098}, {"type": "R", "before": "-- as result of ACA", "after": ". 
Our results can be explained by an increase in the demand for health services without a corresponding adjustment in supply following the implementation of the ACA.", "start_char_pos": 1124, "end_char_pos": 1143}], "sents_char_pos": [0, 139, 351, 576, 752, 903]} {"doc_id": "1812.06562", "revision_depth": "1", "before_revision": "Detecting epileptic seizure through analysis of the electroencephalography (EEG) signal becomes a standard method for the diagnosis of epilepsy. In a manual way, monitoring of long term EEG is tedious and error prone. Therefore, a reliable automatic seizure detection method is desirable. A critical challenge to automatic seizuredetection is that seizure morphologies exhibit considerable variabilities. In order to capture essential seizure patterns, this paper leverages an attention mechanism and a bidirectional long short-term memory (BiLSTM) model to exploit both spatially and temporally discriminating features and account for seizure variabilities. The attention mechanism is to capture spatial features more effectively according to the contributions of brain areas to seizures. The BiLSTM model is to extract more discriminating temporal features in the forward and the backward directions. By accounting for both spatial and temporal variations of seizures, the proposed method is more robust across subjects. The testing results over the noisy real data of CHB-MIT show that the proposed method outperforms the current state-of-the-art methods. In both mixing-patients and cross-patient experiments, the average sensitivity and specificity are both higher while their corresponding standard deviations are lower than the methods in comparison .", "after_revision": "Identifying epileptic seizures through analysis of the electroencephalography (EEG) signal becomes a standard method for the diagnosis of epilepsy. 
Manual seizure identification on EEG by trained neurologists is time-consuming, labor-intensive and error-prone, and a reliable automatic seizure /non-seizure classification method is needed. One of the challenges in automatic seizure/non-seizure classification is that seizure morphologies exhibit considerable variabilities. In order to capture essential seizure patterns, this paper leverages an attention mechanism and a bidirectional long short-term memory (BiLSTM) to exploit both spatial and temporal discriminating features and overcome seizure variabilities. The attention mechanism is to capture spatial features according to the contributions of different brain regions to seizures. The BiLSTM is to extract discriminating temporal features in the forward and the backward directions. Cross-validation experiments and cross-patient experiments over the noisy data of CHB-MIT are performed to evaluate our proposed approach. The obtained average sensitivity of 87.00\\%, specificity of 88.60\\% and precision of 88.63\\% in cross-validation experiments are higher than using the current state-of-the-art methods, and the standard deviations of our approach are lower. The evaluation results of cross-patient experiments indicate that, our approach has better performance compared with the current state-of-the-art methods and is more robust across patients .", "edit_actions": [{"type": "R", "before": "Detecting epileptic seizure", "after": "Identifying epileptic seizures", "start_char_pos": 0, "end_char_pos": 27}, {"type": "R", "before": "In a manual way, monitoring of long term EEG is tedious and error prone. Therefore,", "after": "Manual seizure identification on EEG by trained neurologists is time-consuming, labor-intensive and error-prone, and", "start_char_pos": 145, "end_char_pos": 228}, {"type": "R", "before": "detection method is desirable. A critical challenge to automatic seizuredetection", "after": "/non-seizure classification method is needed. 
One of the challenges in automatic seizure/non-seizure classification", "start_char_pos": 258, "end_char_pos": 339}, {"type": "D", "before": "model", "after": null, "start_char_pos": 549, "end_char_pos": 554}, {"type": "R", "before": "spatially and temporally", "after": "spatial and temporal", "start_char_pos": 571, "end_char_pos": 595}, {"type": "R", "before": "account for", "after": "overcome", "start_char_pos": 624, "end_char_pos": 635}, {"type": "D", "before": "more effectively", "after": null, "start_char_pos": 714, "end_char_pos": 730}, {"type": "R", "before": "brain areas", "after": "different brain regions", "start_char_pos": 765, "end_char_pos": 776}, {"type": "D", "before": "model", "after": null, "start_char_pos": 801, "end_char_pos": 806}, {"type": "D", "before": "more", "after": null, "start_char_pos": 821, "end_char_pos": 825}, {"type": "R", "before": "By accounting for both spatial and temporal variations of seizures, the proposed method is more robust across subjects. The testing results", "after": "Cross-validation experiments and cross-patient experiments", "start_char_pos": 903, "end_char_pos": 1042}, {"type": "D", "before": "real", "after": null, "start_char_pos": 1058, "end_char_pos": 1062}, {"type": "R", "before": "show that the proposed method outperforms the current", "after": "are performed to evaluate our proposed approach. The obtained average sensitivity of 87.00\\%, specificity of 88.60\\% and precision of 88.63\\% in cross-validation experiments are higher than using the current", "start_char_pos": 1079, "end_char_pos": 1132}, {"type": "R", "before": "methods. In both mixing-patients and", "after": "methods, and the standard deviations of our approach are lower. 
The evaluation results of", "start_char_pos": 1150, "end_char_pos": 1186}, {"type": "R", "before": "experiments, the average sensitivity and specificity are both higher while their corresponding standard deviations are lower than the methods in comparison", "after": "experiments indicate that, our approach has better performance compared with the current state-of-the-art methods and is more robust across patients", "start_char_pos": 1201, "end_char_pos": 1356}], "sents_char_pos": [0, 144, 217, 288, 404, 658, 789, 902, 1022, 1158]} {"doc_id": "1812.11183", "revision_depth": "1", "before_revision": "Diffusion MRI is the modality of choice to study alterations of white matter. In the past years, various works have used diffusion MRI for automatic classification of Alzheimers disease . However, the performances obtained with different approaches are difficult to compare because of variations in components such as input data, participant selection, image preprocessing, feature extraction, feature selection (FS) and cross-validation (CV) procedure. Moreover, these studies are also difficult to reproduce because these different components are not readily available. In a previous work (Samper-Gonzalez et al. 2018), we proposed an open-source framework for the reproducible evaluation of AD classification from T1-weighted (T1w) MRI and PET data. In the present paper, we extend this framework to diffusion MRI data . The framework comprises: tools to automatically convert ADNI data into the BIDS standard , pipelines for image preprocessing and feature extraction , baseline classifiers and a rigorous CV procedure. We demonstrate the use of the framework through assessing the influence of diffusion tensor imaging (DTI) metrics (fractional anisotropy - FA, mean diffusivity - MD), feature types, imaging modalities (diffusion MRI or T1w MRI) , data imbalance and FS bias. First , voxel-wise features generally gave better performances than regional features. 
Secondly, FAand MD provided comparable results for voxel-wise features. Thirdly, T1w MRI performed better than diffusion MRI. Fourthly, we demonstrated that using non-nested validation of FS leads to unreliable and over-optimistic results . All the code is publicly available: general-purpose tools have been integrated into the Clinica software (www.clinica.run) and the paper-specific code is available at: URL", "after_revision": "Diffusion MRI is the modality of choice to study alterations of white matter. In past years, various works have used diffusion MRI for automatic classification of AD . However, classification performance obtained with different approaches is difficult to compare and these studies are also difficult to reproduce . In the present paper, we first extend a previously proposed framework to diffusion MRI data for AD classification. Specifically, we add: conversion of diffusion MRI ADNI data into the BIDS standard and pipelines for diffusion MRI preprocessing and feature extraction . We then apply the framework to compare different components. First, FS has a positive impact on classification results: highest balanced accuracy (BA) improved from 0.76 to 0.82 for task CN vs AD. Secondly , voxel-wise features generally gives better performance than regional features. Fractional anisotropy (FA) and mean diffusivity (MD) provided comparable results for voxel-wise features. Moreover, we observe that the poor performance obtained in tasks involving MCI were potentially caused by the small data samples, rather than by the data imbalance. Furthermore, no extensive classification difference exists for different degree of smoothing and registration methods. Besides, we demonstrate that using non-nested validation of FS leads to unreliable and over-optimistic results : 0.05 up to 0.40 relative increase in BA. Lastly, with proper FR and FS, the performance of diffusion MRI features is comparable to that of T1w MRI. 
All the code of the framework and the experiments are publicly available: general-purpose tools have been integrated into the Clinica software package (www.clinica.run) and the paper-specific code is available at: URL", "edit_actions": [{"type": "D", "before": "the", "after": null, "start_char_pos": 81, "end_char_pos": 84}, {"type": "R", "before": "Alzheimers disease", "after": "AD", "start_char_pos": 167, "end_char_pos": 185}, {"type": "R", "before": "the performances", "after": "classification performance", "start_char_pos": 197, "end_char_pos": 213}, {"type": "R", "before": "are", "after": "is", "start_char_pos": 249, "end_char_pos": 252}, {"type": "R", "before": "because of variations in components such as input data, participant selection, image preprocessing, feature extraction, feature selection (FS) and cross-validation (CV) procedure. Moreover,", "after": "and", "start_char_pos": 274, "end_char_pos": 463}, {"type": "R", "before": "because these different components are not readily available. In a previous work (Samper-Gonzalez et al. 2018), we proposed an open-source framework for the reproducible evaluation of AD classification from T1-weighted (T1w) MRI and PET data. In the", "after": ". In the", "start_char_pos": 510, "end_char_pos": 759}, {"type": "R", "before": "extend this", "after": "first extend a previously proposed", "start_char_pos": 778, "end_char_pos": 789}, {"type": "R", "before": ". The framework comprises: tools to automatically convert", "after": "for AD classification. Specifically, we add: conversion of diffusion MRI", "start_char_pos": 822, "end_char_pos": 879}, {"type": "R", "before": ", pipelines for image", "after": "and pipelines for diffusion MRI", "start_char_pos": 913, "end_char_pos": 934}, {"type": "R", "before": ", baseline classifiers and a rigorous CV procedure. 
We demonstrate the use of the framework through assessing the influence of diffusion tensor imaging (DTI) metrics (fractional anisotropy - FA, mean diffusivity - MD), feature types, imaging modalities (diffusion MRI or T1w MRI) , data imbalance and FS bias. First", "after": ". We then apply the framework to compare different components. First, FS has a positive impact on classification results: highest balanced accuracy (BA) improved from 0.76 to 0.82 for task CN vs AD. Secondly", "start_char_pos": 972, "end_char_pos": 1287}, {"type": "R", "before": "gave better performances", "after": "gives better performance", "start_char_pos": 1320, "end_char_pos": 1344}, {"type": "R", "before": "Secondly, FAand MD", "after": "Fractional anisotropy (FA) and mean diffusivity (MD)", "start_char_pos": 1369, "end_char_pos": 1387}, {"type": "R", "before": "Thirdly, T1w MRI performed better than diffusion MRI. Fourthly, we demonstrated", "after": "Moreover, we observe that the poor performance obtained in tasks involving MCI were potentially caused by the small data samples, rather than by the data imbalance. Furthermore, no extensive classification difference exists for different degree of smoothing and registration methods. Besides, we demonstrate", "start_char_pos": 1441, "end_char_pos": 1520}, {"type": "R", "before": ".", "after": ": 0.05 up to 0.40 relative increase in BA. 
Lastly, with proper FR and FS, the performance of diffusion MRI features is comparable to that of T1w MRI.", "start_char_pos": 1608, "end_char_pos": 1609}, {"type": "R", "before": "is", "after": "of the framework and the experiments are", "start_char_pos": 1623, "end_char_pos": 1625}, {"type": "A", "before": null, "after": "package", "start_char_pos": 1715, "end_char_pos": 1715}], "sents_char_pos": [0, 77, 187, 453, 571, 752, 823, 1023, 1281, 1368, 1440, 1494, 1609]} {"doc_id": "1901.02314", "revision_depth": "1", "before_revision": "Recent work shows that people are not solely motivated by the economic consequences of the available actions, but they also have moral preferences for ` doing the right thing ' , independently of its economic consequences. Here we add to this literature with two experiments. In Study 1 (N=567) we implement an extreme dictator game in which dictators either get \\0.50 and another person gets nothing, or the other way around (i.e., the other person gets \\0.50 and the dictator gets nothing). We experimentally manipulate the words describing the available actions using six words, from very negative (e.g., stealing) to very positive (e.g., donating) connotations. Our hypothesis is that people are reluctant to make actions described using words with a negative connotation, and are eager to make actions described using words with a positive connotation, independently of their economic consequences. As predicted, we find that the connotation of the word has a U-shaped effect on pro-sociality . Moreover, we show that the overall pattern of results, the U-shape, but not its details, can be explained using a technique from Computational Linguistics known as Sentiment Analysis . In Study 2 (N=413, pre-registered) we make a step forward and we collect the self-reported moral judgment and feeling associated to each of the six words used in Study 1. 
We show that the rate of pro-sociality in Study 1 can be predicted from the moral judgments and the feelings in Study 2 via Krupka \\& Weber's utility function. In sum, our findings provide additional evidence for the existence and relevance of moral preferences, confirm the usefulness of Krupka \\& Weber's utility function, and suggest that building bridges from computational linguistics to behavioral science can contribute to our understanding of human decision making .", "after_revision": "Recent work suggests that people are not solely motivated by the economic consequences of the available actions, but they also have moral preferences for \" doing the right thing \" , independently of its economic consequences. However, it remains unclear in what situations moral preferences can be activated by simply changing the framing of a decision problem. For example, earlier work proposed that moral preferences can account for framing effects in the Dictator game. However, more recent work has casted doubts on the very existence of framing effects in this game. Here we shed light on this point with two experiments. In Study 1 (N=567) we implement an extreme Dictator game in which dictators either get \\0.50 and another person gets nothing, or the opposite (i.e., the other person gets \\0.50 and the dictator gets nothing). We experimentally manipulate the words describing the available actions using six words, from very negative (e.g., stealing) to very positive (e.g., donating) connotations. As we predicted, we find that the rate of pro-sociality is affected by the words used to describe the available actions . In Study 2 (N=413, pre-registered) we collect the self-reported moral judgment and feeling associated to each of the six words used in Study 1. We show that these moral judgments and feelings can explain the framing effects in Study 1. 
In sum, we find that changing only one word in the instructions of an extreme Dictator game can alter people's behavior, and that behavioral changes can be explained using moral judgments and feelings associated to these words .", "edit_actions": [{"type": "R", "before": "shows", "after": "suggests", "start_char_pos": 12, "end_char_pos": 17}, {"type": "R", "before": "`", "after": "\"", "start_char_pos": 151, "end_char_pos": 152}, {"type": "R", "before": "'", "after": "\"", "start_char_pos": 175, "end_char_pos": 176}, {"type": "R", "before": "Here we add to this literature", "after": "However, it remains unclear in what situations moral preferences can be activated by simply changing the framing of a decision problem. For example, earlier work proposed that moral preferences can account for framing effects in the Dictator game. However, more recent work has casted doubts on the very existence of framing effects in this game. Here we shed light on this point", "start_char_pos": 223, "end_char_pos": 253}, {"type": "R", "before": "dictator", "after": "Dictator", "start_char_pos": 319, "end_char_pos": 327}, {"type": "R", "before": "other way around", "after": "opposite", "start_char_pos": 409, "end_char_pos": 425}, {"type": "R", "before": "Our hypothesis is that people are reluctant to make actions described using words with a negative connotation, and are eager to make actions described using words with a positive connotation, independently of their economic consequences. As", "after": "As we", "start_char_pos": 666, "end_char_pos": 906}, {"type": "R", "before": "connotation of the word has a U-shaped effect on", "after": "rate of", "start_char_pos": 935, "end_char_pos": 983}, {"type": "R", "before": ". 
Moreover, we show that the overall pattern of results, the U-shape, but not its details, can be explained using a technique from Computational Linguistics known as Sentiment Analysis", "after": "is affected by the words used to describe the available actions", "start_char_pos": 998, "end_char_pos": 1182}, {"type": "D", "before": "make a step forward and we", "after": null, "start_char_pos": 1223, "end_char_pos": 1249}, {"type": "R", "before": "the rate of pro-sociality in Study 1 can be predicted from the", "after": "these", "start_char_pos": 1369, "end_char_pos": 1431}, {"type": "R", "before": "the feelings in Study 2 via Krupka \\& Weber's utility function.", "after": "feelings can explain the framing effects in Study 1.", "start_char_pos": 1452, "end_char_pos": 1515}, {"type": "R", "before": "our findings provide additional evidence for the existence and relevance of moral preferences, confirm the usefulness of Krupka \\& Weber's utility function, and suggest that building bridges from computational linguistics to behavioral science can contribute to our understanding of human decision making", "after": "we find that changing only one word in the instructions of an extreme Dictator game can alter people's behavior, and that behavioral changes can be explained using moral judgments and feelings associated to these words", "start_char_pos": 1524, "end_char_pos": 1828}], "sents_char_pos": [0, 222, 275, 492, 665, 903, 999, 1184, 1355, 1515]} {"doc_id": "1901.03319", "revision_depth": "1", "before_revision": "Real datasets often come in the form of an URLanised cloud of points . An overall shape of such data is hard to visualise when data points are given by coordinates in a high-dimensional Euclidean space. More general data can be represented by an abstract graph with weights or distances on links between data points. The skeletonisation problem for a point cloud is to find a graph or a skeleton that correctly represents the shape of a cloud. 
This paper compares three algorithms that solve the data skeletonisation problem for a general cloud with topological and geometric guarantees. First, the Mapper algorithm outputs a network of interlinked clusters. Second, the \\alpha-Reeb algorithm discretises the classical Reeb graphand can be applied to discrete clouds at different scales \\alpha. The third algorithm HoPeS produces a Homologically Persistent Skeleton without any extra parameters. HoPeS represents the 1-dimensional shape of a cloud at any scale by the Optimality Theorem. The Reconstruction Theorem gives conditions on a noisy sample of a graph when HoPeS provides a reconstructed graph with a correct homotopy type and within a small offset of the sample. The experiments on synthetic and real data show that HoPeS outperforms the other algorithms on several topological and geometric measures .", "after_revision": "Data Science aims to extract meaningful knowledge from URLanised data. Real datasets usually come in the form of a cloud of points with only pairwise distances. Numerous applications require to visualise an overall shape of a noisy cloud of points sampled from a non-linear object that is more complicated than a union of disjoint clusters. The skeletonisation problem in its hardest form is to find a 1-dimensional skeleton that correctly represents a shape of the cloud. This paper compares several algorithms that solve the above skeletonisation problem for any point cloud and guarantee a successful reconstruction. For example, given a highly noisy point sample of an unknown underlying graph, a reconstructed skeleton should be geometrically close and homotopy equivalent to (has the same number of independent cycles as) the underlying graph. One of these algorithm produces a Homologically Persistent Skeleton (HoPeS) for any cloud without extra parameters. This universal skeleton contains sub-graphs that provably represent the 1-dimensional shape of the cloud at any scale . 
Other subgraphs of HoPeS reconstruct an unknown graph from its noisy point sample with a correct homotopy type and within a small offset of the sample. The extensive experiments on synthetic and real data reveal for the first time the maximum level of noise that allows successful graph reconstructions .", "edit_actions": [{"type": "R", "before": "Real datasets often", "after": "Data Science aims to extract meaningful knowledge from URLanised data. Real datasets usually", "start_char_pos": 0, "end_char_pos": 19}, {"type": "R", "before": "an URLanised", "after": "a", "start_char_pos": 40, "end_char_pos": 52}, {"type": "R", "before": ". An", "after": "with only pairwise distances. Numerous applications require to visualise an", "start_char_pos": 69, "end_char_pos": 73}, {"type": "R", "before": "such data is hard to visualise when data points are given by coordinates in a high-dimensional Euclidean space. More general data can be represented by an abstract graph with weights or distances on links between data points.", "after": "a noisy cloud of points sampled from a non-linear object that is more complicated than a union of disjoint clusters.", "start_char_pos": 91, "end_char_pos": 316}, {"type": "R", "before": "for a point cloud", "after": "in its hardest form", "start_char_pos": 345, "end_char_pos": 362}, {"type": "R", "before": "graph or a", "after": "1-dimensional", "start_char_pos": 376, "end_char_pos": 386}, {"type": "R", "before": "the shape of a", "after": "a shape of the", "start_char_pos": 422, "end_char_pos": 436}, {"type": "R", "before": "three", "after": "several", "start_char_pos": 464, "end_char_pos": 469}, {"type": "R", "before": "data", "after": "above", "start_char_pos": 496, "end_char_pos": 500}, {"type": "R", "before": "a general cloud with topological and geometric guarantees. First, the Mapper algorithm outputs a network of interlinked clusters. 
Second, the \\alpha-Reeb algorithm discretises the classical Reeb graphand can be applied to discrete clouds at different scales \\alpha. The third algorithm HoPeS", "after": "any point cloud and guarantee a successful reconstruction. For example, given a highly noisy point sample of an unknown underlying graph, a reconstructed skeleton should be geometrically close and homotopy equivalent to (has the same number of independent cycles as) the underlying graph. One of these algorithm", "start_char_pos": 529, "end_char_pos": 820}, {"type": "R", "before": "without any", "after": "(HoPeS) for any cloud without", "start_char_pos": 866, "end_char_pos": 877}, {"type": "R", "before": "HoPeS represents", "after": "This universal skeleton contains sub-graphs that provably represent", "start_char_pos": 896, "end_char_pos": 912}, {"type": "R", "before": "a", "after": "the", "start_char_pos": 940, "end_char_pos": 941}, {"type": "R", "before": "by the Optimality Theorem. The Reconstruction Theorem gives conditions on a noisy sample of a graph when HoPeS provides a reconstructed graph", "after": ". Other subgraphs of HoPeS reconstruct an unknown graph from its noisy point sample", "start_char_pos": 961, "end_char_pos": 1102}, {"type": "A", "before": null, "after": "extensive", "start_char_pos": 1177, "end_char_pos": 1177}, {"type": "R", "before": "show that HoPeS outperforms the other algorithms on several topological and geometric measures", "after": "reveal for the first time the maximum level of noise that allows successful graph reconstructions", "start_char_pos": 1217, "end_char_pos": 1311}], "sents_char_pos": [0, 70, 202, 316, 443, 587, 658, 794, 895, 987, 1172]} {"doc_id": "1901.06006", "revision_depth": "1", "before_revision": "We propose a new deep learning algorithm for multiple microtubule (MT) segmentation in time-lapse images using the recurrent attention. 
Segmentation results from each pair of succeeding frames are being fed into a Hungarian algorithm to assign correspondences among MTs to generate a distinct path through the frames. Based on the obtained trajectories, we calculate MT velocities. Results of this work is expected to help biologists to characterize MT behaviors as well as their potential interactions. To validate our technique, we first use the statistics derived from the real time-lapse series of MT gliding assays to produce a large set of simulated data. We employ this dataset to train our network and optimize its hyperparameters. Then, we utilize the trained model to initialize the network while learning about the real data. Our experimental results show that the proposed algorithm improves the precision for MT instance velocity estimation to 71.3\\% from the baseline result (29.3\\%). We also demonstrate how the injection of temporal information into our network can reduce the false negative rates from 67.8\\% (baseline) down to 28.7\\% (proposed) .", "after_revision": "We propose a new method of instance-level microtubule (MT) tracking in time-lapse image series using recurrent attention. Our novel deep learning algorithm segments individual MTs at each frame. Segmentation results from successive frames are used to assign correspondences among MTs . This ultimately generates a distinct path trajectory for each MT through the frames. Based on these trajectories, we estimate MT velocities. To validate our proposed technique, we conduct experiments using real and simulated data. We use statistics derived from real time-lapse series of MT gliding assays to simulate realistic MT time-lapse image series in our simulated data. This dataset is employed as pre-training and hyperparameter optimization for our network before training on the real data. 
Our experimental results show that the proposed supervised learning algorithm improves the precision for MT instance velocity estimation drastically to 71.3\\% from the baseline result (29.3\\%). We also demonstrate how the inclusion of temporal information into our deep network can reduce the false negative rates from 67.8\\% (baseline) down to 28.7\\% (proposed) . Our findings in this work are expected to help biologists characterize the spatial arrangement of MTs, specifically the effects of MT-MT interactions .", "edit_actions": [{"type": "R", "before": "deep learning algorithm for multiple", "after": "method of instance-level", "start_char_pos": 17, "end_char_pos": 53}, {"type": "R", "before": "segmentation", "after": "tracking", "start_char_pos": 71, "end_char_pos": 83}, {"type": "R", "before": "images using the", "after": "image series using", "start_char_pos": 98, "end_char_pos": 114}, {"type": "A", "before": null, "after": "Our novel deep learning algorithm segments individual MTs at each frame.", "start_char_pos": 136, "end_char_pos": 136}, {"type": "R", "before": "each pair of succeeding frames are being fed into a Hungarian algorithm", "after": "successive frames are used", "start_char_pos": 163, "end_char_pos": 234}, {"type": "R", "before": "to generate", "after": ". 
This ultimately generates", "start_char_pos": 271, "end_char_pos": 282}, {"type": "A", "before": null, "after": "trajectory for each MT", "start_char_pos": 299, "end_char_pos": 299}, {"type": "R", "before": "the obtained", "after": "these", "start_char_pos": 329, "end_char_pos": 341}, {"type": "R", "before": "calculate", "after": "estimate", "start_char_pos": 359, "end_char_pos": 368}, {"type": "D", "before": "Results of this work is expected to help biologists to characterize MT behaviors as well as their potential interactions.", "after": null, "start_char_pos": 384, "end_char_pos": 505}, {"type": "A", "before": null, "after": "proposed", "start_char_pos": 522, "end_char_pos": 522}, {"type": "R", "before": "first use the", "after": "conduct experiments using real and simulated data. We use", "start_char_pos": 537, "end_char_pos": 550}, {"type": "D", "before": "the", "after": null, "start_char_pos": 575, "end_char_pos": 578}, {"type": "R", "before": "produce a large set of", "after": "simulate realistic MT time-lapse image series in our", "start_char_pos": 626, "end_char_pos": 648}, {"type": "R", "before": "We employ this dataset to train our network and optimize its hyperparameters. Then, we utilize the trained model to initialize the network while learning about the", "after": "This dataset is employed as pre-training and hyperparameter optimization for our network before training on the", "start_char_pos": 665, "end_char_pos": 828}, {"type": "A", "before": null, "after": "supervised learning", "start_char_pos": 888, "end_char_pos": 888}, {"type": "A", "before": null, "after": "drastically", "start_char_pos": 958, "end_char_pos": 958}, {"type": "R", "before": "injection", "after": "inclusion", "start_char_pos": 1032, "end_char_pos": 1041}, {"type": "A", "before": null, "after": "deep", "start_char_pos": 1075, "end_char_pos": 1075}, {"type": "A", "before": null, "after": ". 
Our findings in this work are expected to help biologists characterize the spatial arrangement of MTs, specifically the effects of MT-MT interactions", "start_char_pos": 1169, "end_char_pos": 1169}], "sents_char_pos": [0, 135, 319, 383, 505, 664, 742, 839, 1003]} {"doc_id": "1901.09036", "revision_depth": "2", "before_revision": "We provide excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate the target model depends on an unknown model that must be to be estimated from data (a \"nuisance model\") . We analyze a two-stage sample splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target model and one for the nuisance model . We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order. Our theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from statistical learning and machine learning literature to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate the case where the target parameter belongs to a complex nonparametric class. We characterize conditions on the metric entropy such that oracle rates---rates of the same order as if we knew the nuisance model---are achieved. We also analyze the rates achieved by specific estimation algorithms such as variance-penalized empirical risk minimization, neural network estimation and sparse high-dimensional linear model estimation. 
We highlight the applicability of our results in four settings of central importance in the literature : 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.", "after_revision": "We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate the target parameter depends on an unknown nuisance parameter that must be estimated from data . We analyze a two-stage sample splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target parameter and one for the nuisance parameter . We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order. Our theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from statistical learning and machine learning to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate settings in which the target parameter belongs to a complex nonparametric class. We provide conditions on the metric entropy of the nuisance and target classes such that oracle rates---rates of the same order as if we knew the nuisance parameter---are achieved. We also derive new rates for specific estimation algorithms such as variance-penalized empirical risk minimization, neural network estimation and sparse high-dimensional linear model estimation. 
We highlight the applicability of our results in four settings of central importance : 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.", "edit_actions": [{"type": "A", "before": null, "after": "non-asymptotic", "start_char_pos": 11, "end_char_pos": 11}, {"type": "R", "before": "model", "after": "parameter", "start_char_pos": 144, "end_char_pos": 149}, {"type": "R", "before": "model", "after": "nuisance parameter", "start_char_pos": 172, "end_char_pos": 177}, {"type": "D", "before": "to be", "after": null, "start_char_pos": 191, "end_char_pos": 196}, {"type": "D", "before": "(a \"nuisance model\")", "after": null, "start_char_pos": 217, "end_char_pos": 237}, {"type": "R", "before": "model", "after": "parameter", "start_char_pos": 371, "end_char_pos": 376}, {"type": "R", "before": "model", "after": "parameter", "start_char_pos": 402, "end_char_pos": 407}, {"type": "D", "before": "literature", "after": null, "start_char_pos": 867, "end_char_pos": 877}, {"type": "R", "before": "the case where the", "after": "settings in which the", "start_char_pos": 1099, "end_char_pos": 1117}, {"type": "R", "before": "characterize", "after": "provide", "start_char_pos": 1180, "end_char_pos": 1192}, {"type": "A", "before": null, "after": "of the nuisance and target classes", "start_char_pos": 1226, "end_char_pos": 1226}, {"type": "R", "before": "model---are", "after": "parameter---are", "start_char_pos": 1303, "end_char_pos": 1314}, {"type": "R", "before": "analyze the rates achieved by", "after": "derive new rates for", "start_char_pos": 1333, "end_char_pos": 1362}, {"type": "D", "before": "in the literature", "after": null, "start_char_pos": 1614, "end_char_pos": 1631}], "sents_char_pos": [0, 239, 409, 616, 764, 940, 1176, 1324, 1528]} {"doc_id": "1902.09039", "revision_depth": "3", "before_revision": "Microbial communities feature an immense diversity of species and the extent of this diversity 
correlates with outcomes ranging from ecosystem stability to medical prognoses. Yet the mechanisms underlying microbial diversity are not well understood; simple resource-competition models do not allow for coexistence of a large number of species . However , it was recently shown that assuming the existence of metabolic trade-offs - in allocating a limited enzyme production capacity - can lead to unlimited diversity, when nutrients are steadily supplied. Do such trade-offs permit diversity under more realistic, intermittent conditions of nutrient supply? Here, we demonstrate that in serial dilution culture, metabolic trade-offs allow for high diversity. Unlike the constant environment case , diversity depends on the concentration of nutrients supplied to the community. These changes in diversity are driven by an \"early-bird\" effect in which species specialized for nutrients that are initially more abundant gain a population advantage over other species. We explore the interplay of this effect with different environmental factors and diversity-supporting mechanisms , such as immigration and cross-feeding, and find a variety of ensuing relationships between nutrient supply and diversity . The large variation seen in this simple model suggests that real ecosystems may not obey a single universal relationship between nutrient supply and diversity. To connect to real microbial communities, we validate our growth model against previously published Escherichia coli batch and chemostat experiments and outline potential future experiments to test the model's multispecies predictions .", "after_revision": "Microbial communities feature an immense diversity of species and this diversity is linked with outcomes ranging from ecosystem stability to medical prognoses. Yet the mechanisms underlying microbial diversity are under debate. 
While simple resource-competition models don't allow for coexistence of a large number of species , it was recently shown that metabolic trade-offs can allow unlimited diversity. Does this diversity persist with more realistic, intermittent nutrient supply? Here, we demonstrate theoretically that in serial dilution culture, metabolic trade-offs allow for high diversity. When a small amount of nutrient is supplied to each batch, the serial dilution dynamics mimic a chemostat-like steady state. If more nutrient is supplied , diversity depends on the amount of nutrient supplied due to an \"early-bird\" effect . The interplay of this effect with different environmental factors and diversity-supporting mechanisms leads to a variety of relationships between nutrient supply and diversity , suggesting that real ecosystems may not obey a universal nutrient-diversity relationship .", "edit_actions": [{"type": "R", "before": "the extent of this diversity correlates", "after": "this diversity is linked", "start_char_pos": 66, "end_char_pos": 105}, {"type": "R", "before": "not well understood;", "after": "under debate. While", "start_char_pos": 229, "end_char_pos": 249}, {"type": "R", "before": "do not", "after": "don't", "start_char_pos": 285, "end_char_pos": 291}, {"type": "D", "before": ". However", "after": null, "start_char_pos": 343, "end_char_pos": 352}, {"type": "D", "before": "assuming the existence of", "after": null, "start_char_pos": 382, "end_char_pos": 407}, {"type": "R", "before": "- in allocating a limited enzyme production capacity - can lead to unlimited diversity, when nutrients are steadily supplied. Do such trade-offs permit diversity under", "after": "can allow unlimited diversity. 
Does this diversity persist with", "start_char_pos": 429, "end_char_pos": 596}, {"type": "D", "before": "conditions of", "after": null, "start_char_pos": 626, "end_char_pos": 639}, {"type": "A", "before": null, "after": "theoretically", "start_char_pos": 678, "end_char_pos": 678}, {"type": "R", "before": "Unlike the constant environment case", "after": "When a small amount of nutrient is supplied to each batch, the serial dilution dynamics mimic a chemostat-like steady state. If more nutrient is supplied", "start_char_pos": 759, "end_char_pos": 795}, {"type": "R", "before": "concentration of nutrients supplied to the community. These changes in diversity are driven by", "after": "amount of nutrient supplied due to", "start_char_pos": 823, "end_char_pos": 917}, {"type": "R", "before": "in which species specialized for nutrients that are initially more abundant gain a population advantage over other species. We explore the", "after": ". The", "start_char_pos": 941, "end_char_pos": 1079}, {"type": "R", "before": ", such as immigration and cross-feeding, and find", "after": "leads to", "start_char_pos": 1178, "end_char_pos": 1227}, {"type": "D", "before": "ensuing", "after": null, "start_char_pos": 1241, "end_char_pos": 1248}, {"type": "R", "before": ". The large variation seen in this simple model suggests", "after": ", suggesting", "start_char_pos": 1301, "end_char_pos": 1357}, {"type": "R", "before": "single universal relationship between nutrient supply and diversity. 
To connect to real microbial communities, we validate our growth model against previously published Escherichia coli batch and chemostat experiments and outline potential future experiments to test the model's multispecies predictions", "after": "universal nutrient-diversity relationship", "start_char_pos": 1394, "end_char_pos": 1697}], "sents_char_pos": [0, 174, 249, 344, 554, 656, 758, 876, 1064, 1302, 1462]} {"doc_id": "1902.09059", "revision_depth": "1", "before_revision": "Circadian rhythm is usually represented by the 24-hour biological cycles which are mainly driven by the daily light-dark cycle on earth. Light stimulus is widely used as the control input of the circadian system entrainment. In this paper, we study the light-based minimum-time circadian entrainment problem of mammals, Neurospora, and Drosophila based on mathematical models of clock gene transcription . These models contain 3 or even more nonlinear differential equations , and the optimal light control of the entrainment problem on the full models is hard to solve directly . Two model simplification methods are applied to multi-variable models: (1) the phase response curves (PRC) and the circadian phase dynamics of these models are generated by impulse response simulation; (2) the Principal Orthogonal Decomposition (POD) is performed to reduce these models to ones with merely two variables. The minimum-time entrainment of the circadian phase is performed in a feedback form; the minimum-time entrainment of the second-order model is treated as a two-point boundary valued problem(TPBVP), which is solved by a direct shooting algorithm in terms of the Hamiltonian and Pontryagin Minimum Principle. Eventually, the results in low-order models are used for initializing a gradient descent algorithm , which is introduced in solving the entrainment problem of the full models . 
In this paper, we present: (1) the application of PRC and direct shooting algorithm on multi-variable nonlinear models; (2) a general process for solving the minimum-time optimal control problem on multi-variable models; (3) the impacts of minimum-time optimal light on clock gene transcription and protein synthesis.", "after_revision": "The light-based minimum-time circadian entrainment problem for mammals, Neurospora, and Drosophila is studied based on the mathematical models of their circadian gene regulation . These models contain high order nonlinear differential equations . Two model simplification methods are applied to these high-order models: the phase response curves (PRC) and the Principal Orthogonal Decomposition (POD) . The variational calculus and a gradient descent algorithm are applied for solving the optimal light input in the high-order models. As the results of the gradient descent algorithm rely heavily on the initial guesses, we use the optimal control of the PRC and the simplified model to initialize the gradient descent algorithm . In this paper, we present: (1) the application of PRC and direct shooting algorithm on high-order nonlinear models; (2) a general process for solving the minimum-time optimal control problem on high-order models; (3) the impacts of minimum-time optimal light on circadian gene transcription and protein synthesis.", "edit_actions": [{"type": "R", "before": "Circadian rhythm is usually represented by the 24-hour biological cycles which are mainly driven by the daily light-dark cycle on earth. Light stimulus is widely used as the control input of the circadian system entrainment. 
In this paper, we study the", "after": "The", "start_char_pos": 0, "end_char_pos": 252}, {"type": "R", "before": "of", "after": "for", "start_char_pos": 308, "end_char_pos": 310}, {"type": "R", "before": "based on", "after": "is studied based on the", "start_char_pos": 347, "end_char_pos": 355}, {"type": "R", "before": "clock gene transcription", "after": "their circadian gene regulation", "start_char_pos": 379, "end_char_pos": 403}, {"type": "R", "before": "3 or even more", "after": "high order", "start_char_pos": 427, "end_char_pos": 441}, {"type": "D", "before": ", and the optimal light control of the entrainment problem on the full models is hard to solve directly", "after": null, "start_char_pos": 475, "end_char_pos": 578}, {"type": "R", "before": "multi-variable models: (1)", "after": "these high-order models:", "start_char_pos": 629, "end_char_pos": 655}, {"type": "D", "before": "circadian phase dynamics of these models are generated by impulse response simulation; (2) the", "after": null, "start_char_pos": 696, "end_char_pos": 790}, {"type": "R", "before": "is performed to reduce these models to ones with merely two variables. The minimum-time entrainment of the circadian phase is performed in a feedback form; the minimum-time entrainment of the second-order model is treated as a two-point boundary valued problem(TPBVP), which is solved by a direct shooting algorithm in terms of the Hamiltonian and Pontryagin Minimum Principle. Eventually, the results in low-order models are used for initializing a", "after": ". The variational calculus and a gradient descent algorithm are applied for solving the optimal light input in the high-order models. 
As the results of the", "start_char_pos": 832, "end_char_pos": 1281}, {"type": "R", "before": ", which is introduced in solving the entrainment problem of the full models", "after": "rely heavily on the initial guesses, we use the optimal control of the PRC and the simplified model to initialize the gradient descent algorithm", "start_char_pos": 1309, "end_char_pos": 1384}, {"type": "R", "before": "multi-variable", "after": "high-order", "start_char_pos": 1474, "end_char_pos": 1488}, {"type": "R", "before": "multi-variable", "after": "high-order", "start_char_pos": 1585, "end_char_pos": 1599}, {"type": "R", "before": "clock", "after": "circadian", "start_char_pos": 1657, "end_char_pos": 1662}], "sents_char_pos": [0, 136, 224, 405, 580, 782, 902, 987, 1209, 1386, 1506, 1607]} {"doc_id": "1902.10345", "revision_depth": "1", "before_revision": "With the ubiquity of accelerators , such as FPGAs and GPUs, the complexity of high-performance programming is increasing beyond the skill-set of the average scientist in domains outside of computer science. It is thus imperative to decouple programming paradigms and architecture-specific implementation from the underlying scientific computations. We present the Stateful DataFlow multiGraph (SDFG), a data-centric intermediate representation that facilitates high performance application development and optimization. By combining fine-grained data dependencies with high-level control flow , SDFGs are both expressive and amenable to high-level program transformations, such as tiling , vectorization, and double buffering . These transformations are then applied to the SDFG in an interactive process, using extensible pattern matching and graph rewriting. To facilitate this process, we provide a graphical user interface that enables applying transformations, as well as creating custom optimizations, reusable across applications . 
We demonstrate SDFGs on CPUs, GPUs, and FPGAs , using a wide variety of applications and motifs --- from fundamental computational kernels , through polyhedral applications, to graph analytics. We show that the representation is both expressive and performant , allowing domain scientists to develop applications that can be tuned to approach peak hardware performance without modifying the original scientific code.", "after_revision": "The ubiquity of accelerators in high-performance computing has driven programming complexity beyond the skill-set of the average domain scientist. To maintain performance portability in the future, it is imperative to decouple architecture-specific programming paradigms from the underlying scientific computations. We present the Stateful DataFlow multiGraph (SDFG), a data-centric intermediate representation that enables separating program definition from its optimization. By combining fine-grained data dependencies with high-level control-flow , SDFGs are both expressive and amenable to program transformations, such as tiling and double-buffering . These transformations are applied to the SDFG in an interactive process, using extensible pattern matching , graph rewriting, and a graphical user interface . We demonstrate SDFGs on CPUs, GPUs, and FPGAs over various motifs --- from fundamental computational kernels to graph analytics. 
We show that SDFGs deliver competitive performance , allowing domain scientists to develop applications naturally and port them to approach peak hardware performance without modifying the original scientific code.", "edit_actions": [{"type": "R", "before": "With the", "after": "The", "start_char_pos": 0, "end_char_pos": 8}, {"type": "R", "before": ", such as FPGAs and GPUs, the complexity of", "after": "in", "start_char_pos": 34, "end_char_pos": 77}, {"type": "R", "before": "programming is increasing", "after": "computing has driven programming complexity", "start_char_pos": 95, "end_char_pos": 120}, {"type": "R", "before": "scientist in domains outside of computer science. It is thus", "after": "domain scientist. To maintain performance portability in the future, it is", "start_char_pos": 157, "end_char_pos": 217}, {"type": "R", "before": "programming paradigms and architecture-specific implementation", "after": "architecture-specific programming paradigms", "start_char_pos": 241, "end_char_pos": 303}, {"type": "R", "before": "facilitates high performance application development and", "after": "enables separating program definition from its", "start_char_pos": 449, "end_char_pos": 505}, {"type": "R", "before": "control flow", "after": "control-flow", "start_char_pos": 580, "end_char_pos": 592}, {"type": "D", "before": "high-level", "after": null, "start_char_pos": 637, "end_char_pos": 647}, {"type": "R", "before": ", vectorization, and double buffering", "after": "and double-buffering", "start_char_pos": 688, "end_char_pos": 725}, {"type": "D", "before": "then", "after": null, "start_char_pos": 754, "end_char_pos": 758}, {"type": "R", "before": "and graph rewriting. 
To facilitate this process, we provide", "after": ", graph rewriting, and", "start_char_pos": 840, "end_char_pos": 899}, {"type": "D", "before": "that enables applying transformations, as well as creating custom optimizations, reusable across applications", "after": null, "start_char_pos": 927, "end_char_pos": 1036}, {"type": "R", "before": ", using a wide variety of applications and", "after": "over various", "start_char_pos": 1085, "end_char_pos": 1127}, {"type": "D", "before": ", through polyhedral applications,", "after": null, "start_char_pos": 1178, "end_char_pos": 1212}, {"type": "R", "before": "the representation is both expressive and performant", "after": "SDFGs deliver competitive performance", "start_char_pos": 1246, "end_char_pos": 1298}, {"type": "R", "before": "that can be tuned", "after": "naturally and port them", "start_char_pos": 1352, "end_char_pos": 1369}], "sents_char_pos": [0, 206, 348, 519, 727, 860, 1038, 1232]} {"doc_id": "1903.02758", "revision_depth": "1", "before_revision": "It was conjectured by Gupta et . al. [Combinatorica04] that every planar graph can be embedded into \\ell_1 with constant distortion. However, given an n-vertex weighted planar graph, the best upper bound is only O(\\log n) by Rao [SoCG99]. In this paper we study the terminated case , where there is a set K of terminals, and the goal is to embed only the terminals into \\ell_1 with low distortion. In a seminal paper, Okamura and Seymour [J.Comb.Theory81] showed that if all the terminals lie on a single face, they can be embedded isometrically into \\ell_1. More generally, suppose that the terminals could be covered by \\gamma faces . In a recent paper Krauthgamer, Lee and Rika SODA19%DIFDELCMD < ] %%% showed an upper bound of O(\\log \\gamma) on the distortion, improving previous results by Lee and Sidiropoulos [STOC09] and Chekuri et . al. [J.Comb.Theory13]. ] Our contribution is a further improvement of the upper bound to O(\\sqrt{\\log\\gamma}). 
Note that since every planar graph has at most O(n) faces, any further improvement of this result, will imply an improvement upon Rao's long standing upper bound. It is well known that the flow-cut gap equals to the distortion of the best embedding into \\ell_1. In particular , our result provide a polynomial time O(\\sqrt{\\log \\gamma})-approximation to the sparsest cut problem on planar graph , for the case where all the demand pairs can be covered by \\gamma faces.", "after_revision": "It was conjectured by Gupta et al. [Combinatorica04] that every planar graph can be embedded into \\ell_1 with constant distortion. However, given an n-vertex weighted planar graph, the best upper bound on the distortion is only O(\\log n) , by Rao [SoCG99]. In this paper we study the case where there is a set K of terminals, and the goal is to embed only the terminals into \\ell_1 with low distortion. In a seminal paper, Okamura and Seymour [J.Comb.Theory81] showed that if all the terminals lie on a single face, they can be embedded isometrically into \\ell_1. The more general case, where the set of terminals can be covered by \\gamma faces %DIFDELCMD < ] %%% , was studied by Lee and Sidiropoulos [STOC09] and Chekuri et al. [J.Comb.Theory13]. The state of the art is an upper bound of O(\\log \\gamma) by Krauthgamer, Lee and Rika SODA19]. Our contribution is a further improvement on the upper bound to O(\\sqrt{\\log\\gamma}). Since every planar graph has at most O(n) faces, any further improvement on this result, will be a major breakthrough, directly improving upon Rao's long standing upper bound. Moreover, it is well known that the flow-cut gap equals to the distortion of the best embedding into \\ell_1. 
Therefore , our result provides a polynomial time O(\\sqrt{\\log \\gamma})-approximation to the sparsest cut problem on planar graphs , for the case where all the demand pairs can be covered by \\gamma faces.", "edit_actions": [{"type": "D", "before": ".", "after": null, "start_char_pos": 31, "end_char_pos": 32}, {"type": "A", "before": null, "after": "on the distortion", "start_char_pos": 204, "end_char_pos": 204}, {"type": "A", "before": null, "after": ",", "start_char_pos": 223, "end_char_pos": 223}, {"type": "R", "before": "terminated case ,", "after": "case", "start_char_pos": 268, "end_char_pos": 285}, {"type": "R", "before": "More generally, suppose that the terminals could", "after": "The more general case, where the set of terminals can", "start_char_pos": 561, "end_char_pos": 609}, {"type": "D", "before": ". In a recent paper Krauthgamer, Lee and Rika", "after": null, "start_char_pos": 637, "end_char_pos": 682}, {"type": "D", "before": "SODA19", "after": null, "start_char_pos": 683, "end_char_pos": 689}, {"type": "R", "before": "showed an upper bound of O(\\log \\gamma) on the distortion, improving previous results", "after": ", was studied", "start_char_pos": 708, "end_char_pos": 793}, {"type": "D", "before": ".", "after": null, "start_char_pos": 842, "end_char_pos": 843}, {"type": "A", "before": null, "after": "The state of the art is an upper bound of O(\\log \\gamma) by Krauthgamer, Lee and Rika", "start_char_pos": 867, "end_char_pos": 867}, {"type": "A", "before": null, "after": "SODA19", "start_char_pos": 868, "end_char_pos": 868}, {"type": "A", "before": null, "after": ".", "start_char_pos": 869, "end_char_pos": 869}, {"type": "R", "before": "of", "after": "on", "start_char_pos": 912, "end_char_pos": 914}, {"type": "R", "before": "Note that since", "after": "Since", "start_char_pos": 956, "end_char_pos": 971}, {"type": "R", "before": "of", "after": "on", "start_char_pos": 1039, "end_char_pos": 1041}, {"type": "R", "before": "imply an improvement", 
"after": "be a major breakthrough, directly improving", "start_char_pos": 1060, "end_char_pos": 1080}, {"type": "R", "before": "It", "after": "Moreover, it", "start_char_pos": 1119, "end_char_pos": 1121}, {"type": "R", "before": "In particular", "after": "Therefore", "start_char_pos": 1218, "end_char_pos": 1231}, {"type": "R", "before": "provide", "after": "provides", "start_char_pos": 1245, "end_char_pos": 1252}, {"type": "R", "before": "graph", "after": "graphs", "start_char_pos": 1345, "end_char_pos": 1350}], "sents_char_pos": [0, 132, 240, 399, 448, 560, 638, 856, 955, 1118, 1217]} {"doc_id": "1903.03588", "revision_depth": "2", "before_revision": "In this work, we give a short introduction on cough detection efforts that were undertaken during the last decade and we describe the solution for automatic cough detection developed for the AioCare portable spirometry system. As the system is intended to be used in a large variety of environments and different patients, we train the algorithm using the large database of spirometry curves which is the NHANES database by the American National Center for Health Statistics. Using such a massive dataset is a novelty in the field. We apply few data preprocessing steps, derive specific features and train different classifiers such as logistic regression (LR), feed forward artificial neural network (ANN), artificial neural network combined with principal component analysis (PCA-ANN), support vector machine (SVM) and random forest (RF) on this data to choose the one of the best performance. The accuracy, sensitivity and specificity of the classifiers were comparable and equaled within the range 91.1 \\div%DIFDELCMD < }%%% 91.2\\%, 81.8 \\div%DIFDELCMD < }%%% 83.8\\% and 95.0 \\div%DIFDELCMD < }%%% 95.9\\% for the test set, respectively. The ANN solution was selected as the final classifier. Classification methodology developed in this study is robust for detecting cough events during spirometry measurements. 
We also show that it is universal and transferable between different systems as the performance on the NHANES and the AioCare test sets is similar. As far as we know, the solution presented in this work is the first description of the automatic cough detection algorithm based totally on the air flow signals and the first cough detection implemented in the commercial spirometry system that is to be published.", "after_revision": "We give a short introduction to cough detection efforts that were undertaken during the last decade and we describe the solution for automatic cough detection developed for the AioCare portable spirometry system. In contrast to more popular analysis of sound and audio recordings, we fully based our approach on airflow signals only. As the system is intended to be used in a large variety of environments and different patients, we trained and validated the algorithm using AioCare-collected data and the large database of spirometry curves from the NHANES database by the American National Center for Health Statistics. We trained different classifiers, such as logistic regression , feed-forward artificial neural network , support vector machine , and random forest to choose the one with the best performance. The %DIFDELCMD < }%%% %DIFDELCMD < }%%% %DIFDELCMD < }%%% ANN solution was selected as the final classifier. The classification results on the test set (AioCare data) are: 0.86 (sensitivity), 0.91 (specificity), 0.91 (accuracy) and 0.88 (F1 score). The classification methodology developed in this study is robust for detecting cough events during spirometry measurements. 
As far as we know, the solution presented in this work is the first fully reproducible description of the automatic cough detection algorithm based totally on airflow signals and the first cough detection implemented in a commercial spirometry system that is to be published.", "edit_actions": [{"type": "R", "before": "In this work, we", "after": "We", "start_char_pos": 0, "end_char_pos": 16}, {"type": "R", "before": "on", "after": "to", "start_char_pos": 43, "end_char_pos": 45}, {"type": "A", "before": null, "after": "In contrast to more popular analysis of sound and audio recordings, we fully based our approach on airflow signals only.", "start_char_pos": 227, "end_char_pos": 227}, {"type": "R", "before": "train", "after": "trained and validated", "start_char_pos": 327, "end_char_pos": 332}, {"type": "A", "before": null, "after": "AioCare-collected data and", "start_char_pos": 353, "end_char_pos": 353}, {"type": "R", "before": "which is", "after": "from", "start_char_pos": 394, "end_char_pos": 402}, {"type": "R", "before": "Using such a massive dataset is a novelty in the field. 
We apply few data preprocessing steps, derive specific features and train different classifiers", "after": "We trained different classifiers,", "start_char_pos": 478, "end_char_pos": 629}, {"type": "R", "before": "(LR), feed forward", "after": ", feed-forward", "start_char_pos": 658, "end_char_pos": 676}, {"type": "R", "before": "(ANN), artificial neural network combined with principal component analysis (PCA-ANN),", "after": ",", "start_char_pos": 703, "end_char_pos": 789}, {"type": "R", "before": "(SVM)", "after": ",", "start_char_pos": 813, "end_char_pos": 818}, {"type": "D", "before": "(RF) on this data", "after": null, "start_char_pos": 837, "end_char_pos": 854}, {"type": "R", "before": "of", "after": "with", "start_char_pos": 873, "end_char_pos": 875}, {"type": "D", "before": "accuracy, sensitivity and specificity of the classifiers were comparable and equaled within the range 91.1", "after": null, "start_char_pos": 902, "end_char_pos": 1008}, {"type": "D", "before": "\\div", "after": null, "start_char_pos": 1009, "end_char_pos": 1013}, {"type": "D", "before": "91.2\\%, 81.8", "after": null, "start_char_pos": 1031, "end_char_pos": 1043}, {"type": "D", "before": "\\div", "after": null, "start_char_pos": 1044, "end_char_pos": 1048}, {"type": "D", "before": "83.8\\% and 95.0", "after": null, "start_char_pos": 1066, "end_char_pos": 1081}, {"type": "D", "before": "\\div", "after": null, "start_char_pos": 1082, "end_char_pos": 1086}, {"type": "D", "before": "95.9\\% for the test set, respectively. The", "after": null, "start_char_pos": 1104, "end_char_pos": 1146}, {"type": "R", "before": "Classification", "after": "The classification results on the test set (AioCare data) are: 0.86 (sensitivity), 0.91 (specificity), 0.91 (accuracy) and 0.88 (F1 score). 
The classification", "start_char_pos": 1198, "end_char_pos": 1212}, {"type": "D", "before": "We also show that it is universal and transferable between different systems as the performance on the NHANES and the AioCare test sets is similar.", "after": null, "start_char_pos": 1318, "end_char_pos": 1465}, {"type": "A", "before": null, "after": "fully reproducible", "start_char_pos": 1534, "end_char_pos": 1534}, {"type": "R", "before": "the air flow", "after": "airflow", "start_char_pos": 1607, "end_char_pos": 1619}, {"type": "R", "before": "the", "after": "a", "start_char_pos": 1673, "end_char_pos": 1676}], "sents_char_pos": [0, 226, 477, 533, 897, 1142, 1197, 1317, 1465]} {"doc_id": "1903.06696", "revision_depth": "1", "before_revision": "We consider the problem of welfare maximization in two-sided markets using simple mechanisms that are prior-independent. The seminal impossibility result of Myerson and Satterthwaite 1983 shows that even for bilateral trade, there is no feasible (IR, truthful and budget balanced) mechanism that has welfare as high as the optimal-yet-infeasible VCG mechanism, which attains maximal welfare but runs a deficit. On the other hand, the optimal feasible mechanism needs to be carefully tailored to the Bayesian prior, and is extremely complex, eluding a precise description. In this paper we present Bulow-Klemperer-style results to circumvent these hurdles in double-auction market settings . We suggest using the Buyer Trade Reduction (BTR) mechanism, a variant of McAfee's mechanism, that is feasible and simple (in particular, it is deterministic, prior-independent, and anonymous). First, in the setting where the buyers' and sellers' values are sampled i.i.d. from the same distribution, we show that for any such market of any size, BTR with one additional buyer whose value is sampled from the same distribution has expected welfare at least as high as the optimal in the original market. 
We then move to a more general setting where the buyers' values are sampled from one distribution , and the sellers' from another, focusing on the case where the buyers' distribution first-order stochastically dominates the sellers' distribution . We present bounds on the number of buyers that, when added, cause BTR in the augmented market to achieve welfare at least as high as the optimal in the original market. Our lower bounds extend to a large class of mechanisms . In addition, we present positive results about the usefulness of pricing at a sample for welfare maximization in two-sided markets under the above two settings, which to the best of our knowledge are the first sampling results in this context.", "after_revision": "We consider the problem of welfare maximization in two-sided markets using simple mechanisms that are prior-independent. The Myerson-Satterthwaite impossibility theorem shows that even for bilateral trade, there is no feasible (IR, truthful , budget balanced) mechanism that has welfare as high as the optimal-yet-infeasible VCG mechanism, which attains maximal welfare but runs a deficit. On the other hand, the optimal feasible mechanism needs to be carefully tailored to the Bayesian prior, and is extremely complex, eluding a precise description. We present Bulow-Klemperer-style results to circumvent these hurdles in double-auction markets . We suggest using the Buyer Trade Reduction (BTR) mechanism, a variant of McAfee's mechanism, which is feasible and simple (in particular, deterministic, truthful, prior-independent, anonymous). First, in the setting where buyers' and sellers' values are sampled i.i.d. from the same distribution, we show that for any such market of any size, BTR with one additional buyer whose value is sampled from the same distribution has expected welfare at least as high as the optimal in the original market. 
We then move to a more general setting where buyers' values are sampled from one distribution and sellers' from another, focusing on the case where the buyers' distribution first-order stochastically dominates the sellers' . We present bounds on the number of buyers that, when added, guarantees that BTR in the augmented market have welfare at least as high as the optimal in the original market. Our lower bounds extend to a large class of mechanisms , and all of our results extend to adding sellers instead of buyers . In addition, we present positive results about the usefulness of pricing at a sample for welfare maximization in two-sided markets under the above two settings, which to the best of our knowledge are the first sampling results in this context.", "edit_actions": [{"type": "D", "before": "seminal impossibility result of Myerson and Satterthwaite", "after": null, "start_char_pos": 125, "end_char_pos": 182}, {"type": "R", "before": "1983", "after": "Myerson-Satterthwaite impossibility theorem", "start_char_pos": 183, "end_char_pos": 187}, {"type": "R", "before": "and", "after": ",", "start_char_pos": 260, "end_char_pos": 263}, {"type": "R", "before": "In this paper we", "after": "We", "start_char_pos": 572, "end_char_pos": 588}, {"type": "R", "before": "market settings", "after": "markets", "start_char_pos": 673, "end_char_pos": 688}, {"type": "R", "before": "that", "after": "which", "start_char_pos": 784, "end_char_pos": 788}, {"type": "R", "before": "it is deterministic,", "after": "deterministic, truthful,", "start_char_pos": 828, "end_char_pos": 848}, {"type": "D", "before": "and", "after": null, "start_char_pos": 868, "end_char_pos": 871}, {"type": "D", "before": "the", "after": null, "start_char_pos": 912, "end_char_pos": 915}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1239, "end_char_pos": 1242}, {"type": "R", "before": ", and the", "after": "and", "start_char_pos": 1292, "end_char_pos": 1301}, {"type": "D", "before": 
"distribution", "after": null, "start_char_pos": 1427, "end_char_pos": 1439}, {"type": "R", "before": "cause", "after": "guarantees that", "start_char_pos": 1502, "end_char_pos": 1507}, {"type": "R", "before": "to achieve", "after": "have", "start_char_pos": 1536, "end_char_pos": 1546}, {"type": "A", "before": null, "after": ", and all of our results extend to adding sellers instead of buyers", "start_char_pos": 1666, "end_char_pos": 1666}], "sents_char_pos": [0, 120, 410, 571, 690, 883, 1193, 1441, 1610, 1668]} {"doc_id": "1903.11445", "revision_depth": "1", "before_revision": "We study three orientation-based shape descriptors on a set of continuously moving points P : the first principal component, the smallest oriented bounding box and the thinnest strip. Each of these shape descriptors essentially defines a cost capturing the quality of the descriptor (sum of squared distances for principal component, area for oriented bounding box, and width for strip), and uses the orientation that minimizes the cost. This optimal orientation may be very unstable as the points are moving, which is undesirable in many practical scenarios. Alternatively, we can bound the speed with which the orientation of the descriptor may change . However, this can increase the cost (and hence lower the quality ) of the resulting shape descriptor. In this paper we study the trade-off between stability and quality of these shape descriptors. \\% We first show that there is no stateless algorithm, which depends only on the input points in one timestep and not on previous states , that both approximates the minimum cost of a shape descriptor and achieves bounded speed . On the other hand, if we can use the previous state of the shape descriptor to compute the new state, then we can define \"chasing\" algorithms that attempt to follow the optimal orientation with bounded speed. 
Under mild conditions, we show that chasing algorithms with sufficient bounded speed approximate the optimal cost at every time step for oriented bounding boxes and strips , but not for principal components . The analysis of such chasing algorithms is challenging and has received little attention in literature, hence we believe that our methods used to perform this analysis are of independent interest.", "after_revision": "We study three orientation-based shape descriptors on a set of continuously moving points : the first principal component, the smallest oriented bounding box and the thinnest strip. Each of these shape descriptors essentially defines a cost capturing the quality of the descriptor and uses the orientation that minimizes the cost. This optimal orientation may be very unstable as the points are moving, which is undesirable in many practical scenarios. If we bound the speed with which the orientation of the descriptor may change , this may lower the quality of the resulting shape descriptor. In this paper we study the trade-off between stability and quality of these shape descriptors. We first show that there is no stateless algorithm, an algorithm that keeps no state over time , that both approximates the minimum cost of a shape descriptor and achieves continuous motion for the shape descriptor . On the other hand, if we can use the previous state of the shape descriptor to compute the new state, we can define \"chasing\" algorithms that attempt to follow the optimal orientation with bounded speed. We show that, under mild conditions, chasing algorithms with sufficient bounded speed approximate the optimal cost at all times for oriented bounding boxes and strips . 
The analysis of such chasing algorithms is challenging and has received little attention in literature, hence we believe that our methods used in this analysis are of independent interest.", "edit_actions": [{"type": "D", "before": "P", "after": null, "start_char_pos": 90, "end_char_pos": 91}, {"type": "R", "before": "(sum of squared distances for principal component, area for oriented bounding box, and width for strip), and", "after": "and", "start_char_pos": 283, "end_char_pos": 391}, {"type": "R", "before": "Alternatively, we can", "after": "If we", "start_char_pos": 560, "end_char_pos": 581}, {"type": "R", "before": ". However, this can increase the cost (and hence", "after": ", this may", "start_char_pos": 654, "end_char_pos": 702}, {"type": "D", "before": ")", "after": null, "start_char_pos": 721, "end_char_pos": 722}, {"type": "D", "before": "\\%", "after": null, "start_char_pos": 853, "end_char_pos": 855}, {"type": "R", "before": "which depends only on the input points in one timestep and not on previous states", "after": "an algorithm that keeps no state over time", "start_char_pos": 908, "end_char_pos": 989}, {"type": "R", "before": "bounded speed", "after": "continuous motion for the shape descriptor", "start_char_pos": 1067, "end_char_pos": 1080}, {"type": "D", "before": "then", "after": null, "start_char_pos": 1185, "end_char_pos": 1189}, {"type": "R", "before": "Under", "after": "We show that, under", "start_char_pos": 1292, "end_char_pos": 1297}, {"type": "D", "before": "we show that", "after": null, "start_char_pos": 1315, "end_char_pos": 1327}, {"type": "R", "before": "every time step", "after": "all times", "start_char_pos": 1409, "end_char_pos": 1424}, {"type": "D", "before": ", but not for principal components", "after": null, "start_char_pos": 1464, "end_char_pos": 1498}, {"type": "R", "before": "to perform", "after": "in", "start_char_pos": 1644, "end_char_pos": 1654}], "sents_char_pos": [0, 183, 437, 559, 655, 757, 852, 1082, 1291, 1500]} 
{"doc_id": "1904.01121", "revision_depth": "1", "before_revision": "Generative models often use human evaluations to determine and justify progress. Unfortunately, existing human evaluation methods are ad-hoc : there is currently no standardized, validated evaluation that: (1) measures perceptual fidelity, (2) is reliable, (3) separates models into clear rank order, and (4) ensures high-quality measurement without intractable cost. In response, we construct Human-eYe Perceptual Evaluation (HYPE) , a human metric that is (1) grounded in psychophysics research in perception, (2) reliable across different sets of randomly sampled outputs from a model, (3) results in separable model performances, and (4) efficient in cost and time. We introduce two methods. The first, HYPE-Time, measures visual perception under adaptive time constraints to determine the minimum length of time (e.g. , 250ms) that model output such as a generated face needs to be visible for people to distinguish it as real or fake. The second, HYPE-Infinity , measures human error rate on fake and real images with no time constraints, maintaining stability and drastically reducing time and cost . We test HYPE across four state-of-the-art generative adversarial networks (GANs) on unconditional image generation using two datasets, the popular CelebAand the newer higher-resolution FFHQ, and two sampling techniques of model outputs. By simulating HYPE 's evaluation multiple times, we demonstrate consistent ranking of different models, identifying StyleGAN with truncation trick sampling (27.6\\% HYPE-Infinity deception rate, with roughly one quarter of images being misclassified by humans) as superior to StyleGAN without truncation (19.0\\%) on FFHQ. See URL for details .", "after_revision": "Generative models often use human evaluations to measure the perceived quality of their outputs. Automated metrics are noisy indirect proxies, because they rely on heuristics or pretrained embeddings. 
However, up until now, direct human evaluation strategies have been ad-hoc , neither standardized nor validated. Our work establishes a gold standard human benchmark for generative realism. We construct Human eYe Perceptual Evaluation (HYPE) a human benchmark that is (1) grounded in psychophysics research in perception, (2) reliable across different sets of randomly sampled outputs from a model, (3) able to produce separable model performances, and (4) efficient in cost and time. We introduce two variants: one that measures visual perception under adaptive time constraints to determine the threshold at which a model's outputs appear real (e.g. 250ms) , and the other a less expensive variant that measures human error rate on fake and real images sans time constraints . We test HYPE across six state-of-the-art generative adversarial networks and two sampling techniques on conditional and unconditional image generation using four datasets: CelebA, FFHQ, CIFAR-10, and ImageNet. We find that HYPE can track model improvements across training epochs, and we confirm via bootstrap sampling that HYPE rankings are consistent and replicable .", "edit_actions": [{"type": "R", "before": "determine and justify progress. Unfortunately, existing human evaluation methods are", "after": "measure the perceived quality of their outputs. Automated metrics are noisy indirect proxies, because they rely on heuristics or pretrained embeddings. However, up until now, direct human evaluation strategies have been", "start_char_pos": 49, "end_char_pos": 133}, {"type": "R", "before": ": there is currently no standardized, validated evaluation that: (1) measures perceptual fidelity, (2) is reliable, (3) separates models into clear rank order, and (4) ensures high-quality measurement without intractable cost. In response, we construct Human-eYe", "after": ", neither standardized nor validated. Our work establishes a gold standard human benchmark for generative realism. 
We construct Human eYe", "start_char_pos": 141, "end_char_pos": 403}, {"type": "R", "before": ", a human metric", "after": "a human benchmark", "start_char_pos": 433, "end_char_pos": 449}, {"type": "R", "before": "results in", "after": "able to produce", "start_char_pos": 593, "end_char_pos": 603}, {"type": "R", "before": "methods. The first, HYPE-Time,", "after": "variants: one that", "start_char_pos": 687, "end_char_pos": 717}, {"type": "R", "before": "minimum length of time", "after": "threshold at which a model's outputs appear real", "start_char_pos": 794, "end_char_pos": 816}, {"type": "D", "before": ",", "after": null, "start_char_pos": 823, "end_char_pos": 824}, {"type": "D", "before": "that model output such as a generated face needs to be visible for people to distinguish it as real or fake. The second, HYPE-Infinity", "after": null, "start_char_pos": 832, "end_char_pos": 966}, {"type": "A", "before": null, "after": "and the other a less expensive variant that", "start_char_pos": 969, "end_char_pos": 969}, {"type": "R", "before": "with no time constraints, maintaining stability and drastically reducing time and cost", "after": "sans time constraints", "start_char_pos": 1020, "end_char_pos": 1106}, {"type": "R", "before": "four", "after": "six", "start_char_pos": 1129, "end_char_pos": 1133}, {"type": "R", "before": "(GANs) on", "after": "and two sampling techniques on conditional and", "start_char_pos": 1183, "end_char_pos": 1192}, {"type": "R", "before": "two datasets, the popular CelebAand the newer higher-resolution FFHQ, and two sampling techniques of model outputs. By simulating HYPE 's evaluation multiple times, we demonstrate consistent ranking of different models, identifying StyleGAN with truncation trick sampling (27.6\\% HYPE-Infinity deception rate, with roughly one quarter of images being misclassified by humans) as superior to StyleGAN without truncation (19.0\\%) on FFHQ. 
See URL for details", "after": "four datasets: CelebA, FFHQ, CIFAR-10, and ImageNet. We find that HYPE can track model improvements across training epochs, and we confirm via bootstrap sampling that HYPE rankings are consistent and replicable", "start_char_pos": 1230, "end_char_pos": 1686}], "sents_char_pos": [0, 80, 367, 669, 695, 940, 1108, 1345, 1666]} {"doc_id": "1904.03524", "revision_depth": "1", "before_revision": "Addiction and overdose related to prescription opioids have reached an epidemic level in the U.S. , creating an unprecedented national crisis. This has been exacerbated partly due to the lack of tools for physicians to help predict whether or not a patient will develop opioid use disorder. Prior research lacks the investigation of how machine learning can be applied to a big-data platform to ensure an informed and judicious prescribing of opioids . In this study , we explore the Massachusetts All Payer Claim Data(MA APCD) , a de-identified healthcare claim dataset, and propose a novel framework to examine how na\\\"ive users develop opioid use disorder. We perform several feature engineering techniques to identify the influential demographic and clinical features associated with opioid use disorder from a class imbalanced analytic sample. We then use and compare the predictive power of four well-known machine learning algorithms: logistic regression, random forest, decision tree, and gradient boosting, to predict the risk of such dependency. Results showed that the random forest model outperforms the other three algorithms while determining the features, some of which are consistent with prior clinical findings. 
We anticipate that this research has the potential for healthcare practitioners to improve the current prescribing practice of opioids , thereby curbing the increasing rate of opioid addiction .", "after_revision": "Overdose related to prescription opioids have reached an epidemic level in the US , creating an unprecedented national crisis. This has been exacerbated partly due to the lack of tools for physicians to help predict the risk of whether a patient will develop opioid use disorder. Little is known about how machine learning can be applied to a big-data platform to ensure an informed , sustained and judicious prescribing of opioids , in particular for commercially insured population. This study explores Massachusetts All Payer Claims Data , a de-identified healthcare dataset, and proposes a machine learning framework to examine how na\\\"ive users develop opioid use disorder. We perform several feature selections techniques to identify influential demographic and clinical features associated with opioid use disorder from a class imbalanced analytic sample. We then compare the predictive power of four well-known machine learning algorithms: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting to predict the risk of opioid use disorder. The study results show that the Random Forest model outperforms the other three algorithms while determining the features, some of which are consistent with prior clinical findings. Moreover, alongside the higher predictive accuracy, the proposed framework is capable of extracting some risk factors that will add significant knowledge to what is already known in the extant literature. 
We anticipate that this study will help healthcare practitioners improve the current prescribing practice of opioids and contribute to curb the increasing rate of opioid addiction and overdose .", "edit_actions": [{"type": "R", "before": "Addiction and overdose", "after": "Overdose", "start_char_pos": 0, "end_char_pos": 22}, {"type": "R", "before": "U.S.", "after": "US", "start_char_pos": 93, "end_char_pos": 97}, {"type": "R", "before": "whether or not", "after": "the risk of whether", "start_char_pos": 232, "end_char_pos": 246}, {"type": "R", "before": "Prior research lacks the investigation of", "after": "Little is known about", "start_char_pos": 291, "end_char_pos": 332}, {"type": "A", "before": null, "after": ", sustained", "start_char_pos": 414, "end_char_pos": 414}, {"type": "R", "before": ". In this study , we explore the", "after": ", in particular for commercially insured population. This study explores", "start_char_pos": 452, "end_char_pos": 484}, {"type": "R", "before": "Claim Data(MA APCD)", "after": "Claims Data", "start_char_pos": 509, "end_char_pos": 528}, {"type": "D", "before": "claim", "after": null, "start_char_pos": 558, "end_char_pos": 563}, {"type": "R", "before": "propose a novel", "after": "proposes a machine learning", "start_char_pos": 577, "end_char_pos": 592}, {"type": "R", "before": "engineering", "after": "selections", "start_char_pos": 688, "end_char_pos": 699}, {"type": "D", "before": "the", "after": null, "start_char_pos": 723, "end_char_pos": 726}, {"type": "D", "before": "use and", "after": null, "start_char_pos": 858, "end_char_pos": 865}, {"type": "R", "before": "logistic regression, random forest, decision tree, and gradient boosting,", "after": "Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting", "start_char_pos": 943, "end_char_pos": 1016}, {"type": "R", "before": "such dependency. Results showed that the random forest", "after": "opioid use disorder. 
The study results show that the Random Forest", "start_char_pos": 1040, "end_char_pos": 1094}, {"type": "A", "before": null, "after": "Moreover, alongside the higher predictive accuracy, the proposed framework is capable of extracting some risk factors that will add significant knowledge to what is already known in the extant literature.", "start_char_pos": 1231, "end_char_pos": 1231}, {"type": "R", "before": "research has the potential for healthcare practitioners to", "after": "study will help healthcare practitioners", "start_char_pos": 1256, "end_char_pos": 1314}, {"type": "R", "before": ", thereby curbing", "after": "and contribute to curb", "start_char_pos": 1367, "end_char_pos": 1384}, {"type": "A", "before": null, "after": "and overdose", "start_char_pos": 1425, "end_char_pos": 1425}], "sents_char_pos": [0, 142, 290, 453, 660, 849, 1056, 1230]} {"doc_id": "1904.07657", "revision_depth": "1", "before_revision": "Microstructural geometry plays a critical role in a response of heterogeneous materials. Consequently, methods for generating microstructural samples are becoming an integral part of advanced numerical analyses. Here, we extend the unified frameworkof Sonon and co-workers , developed originally for generating particulate and foam-like microstructural geometries of Periodic Unit Cells, to non-periodic microstructural representations based on the formalism of Wang tiles. The formalism has been recently proposed as a generalization of the Periodic Unit Cell approach, enabling a fast synthesis of arbitrarily large, stochastic microstructural samples from a handful of domains with predefined microstructural compatibility constraints. However, a robust procedure capable of designing complex, three-dimensional, foam-like and cellular morphologies of Wang tiles has been missing . This contribution thus significantly broadens the applicability of the tiling concept. 
Since the original framework builds on a random sequential addition of particles enhanced with a level-set description , we first devise an analysis based on a connectivity graph of a tile set, resolving the question where should a particle be copied when it intersects a tile boundary. Next, we introduce several modifications to the original algorithm that are necessary to ensure microstructural compatibility in the generalized periodicity setting of Wang tiles. Having a universal procedure for generating tile morphologies at hand , we compare strictly aperiodic and stochastic sets of the same cardinality in terms of reducing the artificial periodicity in reconstructed microstructural samples , and demonstrate the superiority of the vertex-defined tile sets for two-dimensional problems . Finally, we illustrate the capabilities of the algorithm with two- and three-dimensional examples.", "after_revision": "Microstructural geometry plays a critical role in the response of heterogeneous materials. Consequently, methods for generating microstructural samples are increasingly crucial to advanced numerical analyses. We extend Sonon et al.'s unified framework , developed originally for generating particulate and foam-like microstructural geometries of Periodic Unit Cells, to non-periodic microstructural representations based on the formalism of Wang tiles. This formalism has been recently proposed in order to generalize the Periodic Unit Cell approach, enabling a fast synthesis of arbitrarily large, stochastic microstructural samples from a handful of domains with predefined compatibility constraints. However, a robust procedure capable of designing complex, three-dimensional, foam-like and cellular morphologies of Wang tiles has not yet been proposed . This contribution fills the gap by significantly broadening the applicability of the tiling concept. 
Since the original Sonon et al.'s framework builds on a random sequential addition of particles enhanced with an implicit representation of particle boundaries by the level-set field , we first devise an analysis based on a connectivity graph of a tile set, resolving the question where a particle should be copied when it intersects a tile boundary. Next, we introduce several modifications to the original algorithm that are necessary to ensure microstructural compatibility in the generalized periodicity setting of Wang tiles. Having established a universal procedure for generating tile morphologies , we compare strictly aperiodic and stochastic sets with the same cardinality in terms of reducing the artificial periodicity in reconstructed microstructural samples . We demonstrate the superiority of the vertex-defined tile sets for two-dimensional problems and illustrate the capabilities of the algorithm with two- and three-dimensional examples.", "edit_actions": [{"type": "R", "before": "a", "after": "the", "start_char_pos": 50, "end_char_pos": 51}, {"type": "R", "before": "becoming an integral part of", "after": "increasingly crucial to", "start_char_pos": 154, "end_char_pos": 182}, {"type": "R", "before": "Here, we extend the unified frameworkof Sonon and co-workers", "after": "We extend Sonon et al.'s unified framework", "start_char_pos": 212, "end_char_pos": 272}, {"type": "R", "before": "The", "after": "This", "start_char_pos": 474, "end_char_pos": 477}, {"type": "R", "before": "as a generalization of", "after": "in order to generalize", "start_char_pos": 515, "end_char_pos": 537}, {"type": "D", "before": "microstructural", "after": null, "start_char_pos": 696, "end_char_pos": 711}, {"type": "R", "before": "been missing", "after": "not yet been proposed", "start_char_pos": 870, "end_char_pos": 882}, {"type": "R", "before": "thus significantly broadens", "after": "fills the gap by significantly broadening", "start_char_pos": 903, "end_char_pos": 930}, {"type": 
"A", "before": null, "after": "Sonon et al.'s", "start_char_pos": 991, "end_char_pos": 991}, {"type": "R", "before": "a", "after": "an implicit representation of particle boundaries by the", "start_char_pos": 1068, "end_char_pos": 1069}, {"type": "R", "before": "description", "after": "field", "start_char_pos": 1080, "end_char_pos": 1091}, {"type": "R", "before": "should a particle", "after": "a particle should", "start_char_pos": 1196, "end_char_pos": 1213}, {"type": "A", "before": null, "after": "established", "start_char_pos": 1447, "end_char_pos": 1447}, {"type": "D", "before": "at hand", "after": null, "start_char_pos": 1503, "end_char_pos": 1510}, {"type": "R", "before": "of", "after": "with", "start_char_pos": 1563, "end_char_pos": 1565}, {"type": "R", "before": ", and", "after": ". We", "start_char_pos": 1676, "end_char_pos": 1681}, {"type": "R", "before": ". Finally, we", "after": "and", "start_char_pos": 1771, "end_char_pos": 1784}], "sents_char_pos": [0, 88, 211, 473, 738, 884, 971, 1259, 1439, 1772]} {"doc_id": "1904.10508", "revision_depth": "1", "before_revision": "Quantum computing and the workings of the brain have many aspects in common and have been attracting increasing attention in academia and industry. The computation in both is parallel and non-discrete at least in time . Though the underlying physical dynamics (e.g., equation of motion) may be deterministic, the observed or interpreted outcomes look probabilistic. Consequently, various investigations have thus been undertaken to understand and mimic the brain on the basis of quantum computing and physics . However, there have been arguments from physics and cognitive science points of view on whether the brain can and have to take advantage of quantum phenomena that need to survive in the macroscopic space-time region at room temperature. 
This paper presents a unique physical and microscopic computational model of the brain based on an ansatz that the brain computes in a manner similar to quantum computing, not with quantum waves but with classical waves. Log-scale encoding of information in the context of wave-based computing plays a critical role in bridging the gap between Cartesian and Tensor product states of classical and quantum waves. Our model can provide a basis for a unified computing framework of artificial intelligence and quantum computing .", "after_revision": "Quantum computing and the workings of the brain have many aspects in common and have been attracting increasing attention in academia and industry. The computation in both is parallel and non-discrete . Though the underlying physical dynamics (e.g., equation of motion) may be deterministic, the observed or interpreted outcomes are often probabilistic. Consequently, various investigations have been undertaken to understand and reproduce the brain on the basis of quantum physics and computing . However, there have been arguments on whether the brain can and have to take advantage of quantum phenomena that need to survive in the macroscopic space-time region at room temperature. This paper presents a unique microscopic computational model for the brain based on an ansatz that the brain computes in a manner similar to quantum computing, but with classical waves. Log-scale encoding of information in the context of computing with waves is shown to play a critical role in bridging the computing models with classical and quantum waves. 
Our quantum-inspired computing model opens up a possibility of unifying the computing framework of artificial intelligence and quantum computing beyond quantum machine learning approaches .", "edit_actions": [{"type": "D", "before": "at least in time", "after": null, "start_char_pos": 201, "end_char_pos": 217}, {"type": "R", "before": "look", "after": "are often", "start_char_pos": 346, "end_char_pos": 350}, {"type": "D", "before": "thus", "after": null, "start_char_pos": 408, "end_char_pos": 412}, {"type": "R", "before": "mimic", "after": "reproduce", "start_char_pos": 447, "end_char_pos": 452}, {"type": "R", "before": "computing and physics", "after": "physics and computing", "start_char_pos": 487, "end_char_pos": 508}, {"type": "D", "before": "from physics and cognitive science points of view", "after": null, "start_char_pos": 546, "end_char_pos": 595}, {"type": "D", "before": "physical and", "after": null, "start_char_pos": 777, "end_char_pos": 789}, {"type": "R", "before": "of", "after": "for", "start_char_pos": 822, "end_char_pos": 824}, {"type": "D", "before": "not with quantum waves", "after": null, "start_char_pos": 920, "end_char_pos": 942}, {"type": "R", "before": "wave-based computing plays", "after": "computing with waves is shown to play", "start_char_pos": 1021, "end_char_pos": 1047}, {"type": "R", "before": "gap between Cartesian and Tensor product states of", "after": "computing models with", "start_char_pos": 1080, "end_char_pos": 1130}, {"type": "R", "before": "model can provide a basis for a unified", "after": "quantum-inspired computing model opens up a possibility of unifying the", "start_char_pos": 1164, "end_char_pos": 1203}, {"type": "A", "before": null, "after": "beyond quantum machine learning approaches", "start_char_pos": 1273, "end_char_pos": 1273}], "sents_char_pos": [0, 147, 219, 365, 510, 747, 968, 1159]} {"doc_id": "1904.11392", "revision_depth": "1", "before_revision": "We consider continuous-time Mean-variance (MV) portfolio 
optimization problem in the Reinforcement Learning (RL) setting . The problem falls into the entropy-regularized relaxed stochastic control framework recently introduced in Wang et al. (2019). We derive the feedback exploration policy as the Gaussiandistribution , with time-decaying variance. Close connections between the entropy-regularized MV and the classical MV are also discussed , including the solvability equivalence and the convergence as exploration decays . Finally, we prove a policy improvement theorem (PIT) for the continuous-time MV problem under both entropy regularization and control relaxation. The PIT leads to an implementable RL algorithm for the continuous-time MV problem. Our algorithm outperforms an adaptive control based method that estimates the underlying parameters in real-time and a state-of-the-art RL method that uses deep neural networks for continuous control problems by a large margin in nearly all simulations.", "after_revision": "We approach the continuous-time mean-variance (MV) portfolio selection with reinforcement learning (RL) . The problem is to achieve the best tradeoff between exploration and exploitation, and is formulated as an entropy-regularized , relaxed stochastic control problem. We prove that the optimal feedback policy for this problem must be Gaussian , with time-decaying variance. We then establish connections between the entropy-regularized MV and the classical MV , including the solvability equivalence and the convergence as exploration weighting parameter decays to zero . Finally, we prove a policy improvement theorem , based on which we devise an implementable RL algorithm . 
We find that our algorithm outperforms both an adaptive control based method and a deep neural networks based algorithm by a large margin in our simulations.", "edit_actions": [{"type": "R", "before": "consider", "after": "approach the", "start_char_pos": 3, "end_char_pos": 11}, {"type": "R", "before": "Mean-variance", "after": "mean-variance", "start_char_pos": 28, "end_char_pos": 41}, {"type": "R", "before": "optimization problem in the Reinforcement Learning", "after": "selection with reinforcement learning", "start_char_pos": 57, "end_char_pos": 107}, {"type": "D", "before": "setting", "after": null, "start_char_pos": 113, "end_char_pos": 120}, {"type": "R", "before": "falls into the", "after": "is to achieve the best tradeoff between exploration and exploitation, and is formulated as an", "start_char_pos": 135, "end_char_pos": 149}, {"type": "A", "before": null, "after": ",", "start_char_pos": 170, "end_char_pos": 170}, {"type": "R", "before": "framework recently introduced in Wang et al. (2019). We derive the feedback exploration policy as the Gaussiandistribution", "after": "problem. We prove that the optimal feedback policy for this problem must be Gaussian", "start_char_pos": 198, "end_char_pos": 320}, {"type": "R", "before": "Close", "after": "We then establish", "start_char_pos": 352, "end_char_pos": 357}, {"type": "D", "before": "are also discussed", "after": null, "start_char_pos": 426, "end_char_pos": 444}, {"type": "R", "before": "decays", "after": "weighting parameter decays to zero", "start_char_pos": 520, "end_char_pos": 526}, {"type": "R", "before": "(PIT) for the continuous-time MV problem under both entropy regularization and control relaxation. The PIT leads to", "after": ", based on which we devise", "start_char_pos": 576, "end_char_pos": 691}, {"type": "R", "before": "for the continuous-time MV problem. Our algorithm outperforms", "after": ". 
We find that our algorithm outperforms both", "start_char_pos": 722, "end_char_pos": 783}, {"type": "R", "before": "that estimates the underlying parameters in real-time and a state-of-the-art RL method that uses", "after": "and a", "start_char_pos": 817, "end_char_pos": 913}, {"type": "R", "before": "for continuous control problems", "after": "based algorithm", "start_char_pos": 935, "end_char_pos": 966}, {"type": "R", "before": "nearly all", "after": "our", "start_char_pos": 988, "end_char_pos": 998}], "sents_char_pos": [0, 122, 250, 351, 528, 674, 757]} {"doc_id": "1904.12834", "revision_depth": "2", "before_revision": "This paper presents a framework of developing neural networks for predicting implied volatility surfaces. Conventional financial conditions and empirical evidence related to the implied volatility are incorporated into the neural network architecture design and model training including no static arbitrage, boundaries, asymptotic slope and volatility smile. They are also satisfied empirically by the option data on the S&P 500 index over twenty years. The developed neural network model and its simplified variations outperform the widely used surface stochastic volatility inspired (SSVI) model on the mean average percentage error in both in-sample and out-of-sample datasets. This study has two main methodological contributions. First, an accurate deep learning prediction model is developed and tailored to implied volatility surfaces . Second, a framework, which seamlessly combines data-driven models with financial theories, can be extended and applied to solve other related business problems .", "after_revision": "This paper presents a framework of developing neural networks to predict implied volatility surfaces. It can incorporate the related properties from existing mathematical models and empirical findings, including no static arbitrage, limiting boundaries, asymptotic slope and volatility smile. 
These properties are also satisfied empirically in our experiments with the option data on the S&P 500 index over 20 years. The developed neural network model outperforms the widely used surface stochastic volatility inspired (SSVI) model and other benchmarked neural network models on the mean average percentage error in both in-sample and out-of-sample datasets. This study has two major contributions. First, it contributes to the recent use of machine learning in finance, and an accurate deep learning implied volatility surface prediction model is obtained . Second, it provides the methodological guidance on how to seamlessly combine data-driven models with domain knowledge in the development of machine learning applications .", "edit_actions": [{"type": "R", "before": "for predicting", "after": "to predict", "start_char_pos": 62, "end_char_pos": 76}, {"type": "R", "before": "Conventional financial conditions and empirical evidence related to the implied volatility are incorporated into the neural network architecture design and model training", "after": "It can incorporate the related properties from existing mathematical models and empirical findings,", "start_char_pos": 106, "end_char_pos": 276}, {"type": "A", "before": null, "after": "limiting", "start_char_pos": 308, "end_char_pos": 308}, {"type": "R", "before": "They", "after": "These properties", "start_char_pos": 360, "end_char_pos": 364}, {"type": "R", "before": "by", "after": "in our experiments with", "start_char_pos": 396, "end_char_pos": 398}, {"type": "R", "before": "twenty", "after": "20", "start_char_pos": 441, "end_char_pos": 447}, {"type": "R", "before": "and its simplified variations outperform", "after": "outperforms", "start_char_pos": 490, "end_char_pos": 530}, {"type": "A", "before": null, "after": "and other benchmarked neural network models", "start_char_pos": 599, "end_char_pos": 599}, {"type": "R", "before": "main methodological", "after": "major", "start_char_pos": 702, "end_char_pos": 
721}, {"type": "A", "before": null, "after": "it contributes to the recent use of machine learning in finance, and", "start_char_pos": 744, "end_char_pos": 744}, {"type": "A", "before": null, "after": "implied volatility surface", "start_char_pos": 771, "end_char_pos": 771}, {"type": "R", "before": "developed and tailored to implied volatility surfaces", "after": "obtained", "start_char_pos": 792, "end_char_pos": 845}, {"type": "R", "before": "a framework, which seamlessly combines", "after": "it provides the methodological guidance on how to seamlessly combine", "start_char_pos": 856, "end_char_pos": 894}, {"type": "R", "before": "financial theories, can be extended and applied to solve other related business problems", "after": "domain knowledge in the development of machine learning applications", "start_char_pos": 919, "end_char_pos": 1007}], "sents_char_pos": [0, 105, 359, 454, 682, 736, 847]} {"doc_id": "1905.01302", "revision_depth": "1", "before_revision": "OT (Operational Transformation) was invented for supporting real-time co-editors in the late 1980s and has evolved to become a core technique used in today's working co-editors and adopted in major industrial products. CRDT (Commutative Replicated Data Type) for co-editors was first proposed around 2006, under the name of WOOT (WithOut Operational Transformation). Follow-up CRDT variations are commonly labeled as \"post-OT\" techniques and have made broad claims of superiority over OT solutions, in terms of correctness, time and space complexity, simplicity, etc . Over one decade later, however, OT remains the choice for building the vast majority of co-editors , whereas CRDTis rarely found in working co-editors. Why? Based on comprehensive review and comparison on representative OT and CRDT solutions and co-editors based on them , we present our discoveries in relation to this question and beyond in a series of three articles. 
In prior work, we have revealed that CRDT is like OT in following the same general transformation approach and CRDT is not natively commutative for concurrent operations in co-editors, which helps to clarify what CRDT really is and is not for co-editors. In this article , we reveal OT and CRDT differences in correctness and complexity by dissecting and examining representative OT and CRDT solutions. We explore how different basic approaches , i.e. the concurrency-centric approach taken by OT and the content-centric approach taken by CRDT, had resulted in different technical challenges , correctness and complexity issues , and solutions . Moreover, we reveal hidden algorithmic flaws with representative CRDT solutions, and discuss common myths and facts related to correctness , time and space complexity , and simplicity of OT and CRDT . We present facts and evidences that refute CRDT claimed advantages over OT .", "after_revision": "OT (Operational Transformation) was invented for supporting real-time co-editors in the late 1980s and has evolved to become core techniques widely used in today's working co-editors and adopted in industrial products. CRDT (Commutative Replicated Data Type) for co-editors was first proposed around 2006, under the name of WOOT (WithOut Operational Transformation). Follow-up CRDT variations are commonly labeled as \"post-OT\" techniques capable of making concurrent operations natively commutative in co-editors. On top of that, CRDT solutions have made broad claims of superiority over OT solutions, and often portrayed OT as an incorrect and inefficient technique . Over one decade later, however, CRDT is rarely found in working co-editors; OT remains the choice for building the vast majority of today's co-editors . Contradictions between the reality and CRDT's purported advantages have been the source of much confusion and debate among co-editing researcher sand developers. 
To seek truth from facts, we set out to conduct a comprehensive and critical review on representative OT and CRDT solutions and co-editors based on them . From this work, we have made important discoveries about OT and CRDT, and revealed facts and evidences that refute CRDT claims over OT on all accounts. These discoveries help explain the underlying reasons for the choice between OT and CRDT in the real world. We report these results in a series of three articles. In the second article of this series , we reveal the differences between OT and CRDT in their basic approaches to realizing the same general transformation and how such differences had resulted in different challenges and consequential correctness and complexity issues . Moreover, we reveal hidden complexity and algorithmic flaws with representative CRDT solutions, and discuss common myths and facts related to correctness and complexity of OT and CRDT .", "edit_actions": [{"type": "R", "before": "a core technique", "after": "core techniques widely", "start_char_pos": 125, "end_char_pos": 141}, {"type": "D", "before": "major", "after": null, "start_char_pos": 192, "end_char_pos": 197}, {"type": "R", "before": "and", "after": "capable of making concurrent operations natively commutative in co-editors. On top of that, CRDT solutions", "start_char_pos": 438, "end_char_pos": 441}, {"type": "R", "before": "in terms of correctness, time and space complexity, simplicity, etc", "after": "and often portrayed OT as an incorrect and inefficient technique", "start_char_pos": 499, "end_char_pos": 566}, {"type": "A", "before": null, "after": "CRDT is rarely found in working co-editors;", "start_char_pos": 601, "end_char_pos": 601}, {"type": "A", "before": null, "after": "today's", "start_char_pos": 658, "end_char_pos": 658}, {"type": "R", "before": ", whereas CRDTis rarely found in working co-editors. Why? Based on comprehensive review and comparison", "after": ". 
Contradictions between the reality and CRDT's purported advantages have been the source of much confusion and debate among co-editing researcher sand developers. To seek truth from facts, we set out to conduct a comprehensive and critical review", "start_char_pos": 670, "end_char_pos": 772}, {"type": "R", "before": ", we present our discoveries in relation to this question and beyond in a series of three articles. In prior", "after": ". From this", "start_char_pos": 842, "end_char_pos": 950}, {"type": "R", "before": "revealed that CRDT is like OT in following the same general transformation approach and CRDT is not natively commutative for concurrent operations in co-editors, which helps to clarify what CRDT really is and is not for co-editors. In this article", "after": "made important discoveries about OT and CRDT, and revealed facts and evidences that refute CRDT claims over OT on all accounts. These discoveries help explain the underlying reasons for the choice between OT and CRDT in the real world. We report these results in a series of three articles. In the second article of this series", "start_char_pos": 965, "end_char_pos": 1212}, {"type": "R", "before": "OT and CRDT differences in correctness and complexity by dissecting and examining representative", "after": "the differences between", "start_char_pos": 1225, "end_char_pos": 1321}, {"type": "R", "before": "solutions. We explore how different basic approaches , i.e. 
the concurrency-centric approach taken by OT and the content-centric approach taken by CRDT,", "after": "in their basic approaches to realizing the same general transformation and how such differences", "start_char_pos": 1334, "end_char_pos": 1486}, {"type": "R", "before": "technical challenges ,", "after": "challenges and consequential", "start_char_pos": 1513, "end_char_pos": 1535}, {"type": "D", "before": ", and solutions", "after": null, "start_char_pos": 1570, "end_char_pos": 1585}, {"type": "A", "before": null, "after": "complexity and", "start_char_pos": 1615, "end_char_pos": 1615}, {"type": "R", "before": ", time and space complexity , and simplicity", "after": "and complexity", "start_char_pos": 1728, "end_char_pos": 1772}, {"type": "D", "before": ". We present facts and evidences that refute CRDT claimed advantages over OT", "after": null, "start_char_pos": 1788, "end_char_pos": 1864}], "sents_char_pos": [0, 218, 366, 568, 727, 941, 1196, 1344, 1587]} {"doc_id": "1905.07546", "revision_depth": "1", "before_revision": "The effects of weather on agriculture in recent years have become a major concern across the globe . Hence, the need for an effective weather risk management tool ( weather derivatives) for agricultural stakeholders . However, most of these stakeholders are unwilling to pay for the price of weather derivatives (WD) because of product-design and geographical basis risks in the pricing models of WD. Using machine learning ensemble technique for crop yield forecasting and feature importance, the major major weather variable (average temperature) that affects crop yields are empirically determined. This variable (average temperature) is used as the underlying index for WD to eliminate product-design basis risks. A model with time-varying speed of mean reversion, seasonal mean, local volatility that depends on the average temperature and time for the contract period is proposed. 
Based on this model , pricing models for futures, options on futures, and basket futures for cumulative average temperature and growing degree-days are presented. Pricing futures on baskets reduces geographical basis risk as buyer's have the opportunity to select the most appropriate weather stations with their desired weight preference. With these pricing models, agricultural stakeholders can hedge their crops against the perils of weather.", "after_revision": "The effects of weather on agriculture in recent years have become a major global concern . Hence, the need for an effective weather risk management tool ( i.e., weather derivatives) that can hedge crop yields against weather uncertainties . However, most smallholder farmers and agricultural stakeholders are unwilling to pay for the price of weather derivatives (WD) because of the presence of basis risks ( product-design and geographical ) in the pricing models . To eliminate product-design basis risks, a machine learning ensemble technique was used to determine the relationship between maize yield and weather variables. The results revealed that the most significant weather variable that affected the yield of maize was average temperature. A mean-reverting model with a time-varying speed of mean reversion, seasonal mean, and local volatility that depended on the local average temperature was then proposed. The model was extended to a multi-dimensional model for different but correlated locations. Based on these average temperature models , pricing models for futures, options on futures, and basket futures for cumulative average temperature and growing degree-days are presented. Pricing futures on baskets reduces geographical basis risk , as buyers have the opportunity to select the most appropriate weather stations with their desired weight preference. 
With these pricing models, farmers and agricultural stakeholders can hedge their crops against the perils of extreme weather.", "edit_actions": [{"type": "R", "before": "concern across the globe", "after": "global concern", "start_char_pos": 74, "end_char_pos": 98}, {"type": "A", "before": null, "after": "i.e.,", "start_char_pos": 165, "end_char_pos": 165}, {"type": "R", "before": "for agricultural stakeholders", "after": "that can hedge crop yields against weather uncertainties", "start_char_pos": 187, "end_char_pos": 216}, {"type": "R", "before": "of these", "after": "smallholder farmers and agricultural", "start_char_pos": 233, "end_char_pos": 241}, {"type": "A", "before": null, "after": "the presence of basis risks (", "start_char_pos": 329, "end_char_pos": 329}, {"type": "R", "before": "basis risks", "after": ")", "start_char_pos": 362, "end_char_pos": 373}, {"type": "R", "before": "of WD. Using", "after": ". To eliminate product-design basis risks, a", "start_char_pos": 396, "end_char_pos": 408}, {"type": "R", "before": "for crop yield forecasting and feature importance, the major major weather variable (average temperature) that affects crop yields are empirically determined. This variable (average temperature) is used as the underlying index for WD to eliminate product-design basis risks. A model with", "after": "was used to determine the relationship between maize yield and weather variables. The results revealed that the most significant weather variable that affected the yield of maize was average temperature. A mean-reverting model with a", "start_char_pos": 445, "end_char_pos": 732}, {"type": "A", "before": null, "after": "and", "start_char_pos": 786, "end_char_pos": 786}, {"type": "R", "before": "depends on the average temperature and time for the contract period is proposed. Based on this model", "after": "depended on the local average temperature was then proposed. 
The model was extended to a multi-dimensional model for different but correlated locations. Based on these average temperature models", "start_char_pos": 809, "end_char_pos": 909}, {"type": "R", "before": "as buyer's", "after": ", as buyers", "start_char_pos": 1112, "end_char_pos": 1122}, {"type": "A", "before": null, "after": "farmers and", "start_char_pos": 1257, "end_char_pos": 1257}, {"type": "A", "before": null, "after": "extreme", "start_char_pos": 1328, "end_char_pos": 1328}], "sents_char_pos": [0, 100, 218, 603, 719, 889, 1052, 1229]} {"doc_id": "1906.00454", "revision_depth": "1", "before_revision": "Environmental stresses such as drought , and heat can cause substantial yield loss in corn hybrids . As such, corn hybrids which are tolerant to drought , and heat would produce more consistent yields compared to the hybrids which are not tolerant to these stresses. In the 2019 Syngenta Crop Challenge, Syngenta released several large datasets that recorded the yield performances of 2,452 maize hybrids planted in 1,560 locations between 2008 and 2017 and asked participants to classify the corn hybrids as either tolerant or susceptible to heat stress, drought stress, and stress due to the combination of heat and drought . As one of the winning teams, we designed a two-step approach to solve this problem in an unsupervised way since no dataset was provided that classified any set of hybrids as tolerant or susceptible to any stress. First, we designed a deep convolutional neural network (CNN) that took advantage of state-of-the-art modeling and solution techniques to extract stress metrics for each types of stress. Our CNN model was found to successfully distinguish between the low and high stress environments due to considering multiple factors such as plant /harvest dates, daily weather, and soil conditions. 
Then, we conducted a linear regression of the yield of hybrids against each stress metric, and classified the hybrids based on the slope the regression line, since the slope of the regression line showed how sensitive a hybrid was to a specific environmental stress. Our results suggested that only 14 \\% of hybrids were tolerant to at least one type of stress.", "after_revision": "Environmental stresses such as drought and heat can cause substantial yield loss in agriculture . As such, hybrid crops which are tolerant to drought and heat stress would produce more consistent yields compared to the hybrids which are not tolerant to these stresses. In the 2019 Syngenta Crop Challenge, Syngenta released several large datasets that recorded the yield performances of 2,452 corn hybrids planted in 1,560 locations between 2008 and 2017 and asked participants to classify the corn hybrids as either tolerant or susceptible to drought stress, heat stress, and combined drought and heat stress . As one of the winning teams, we designed a two-step approach to solve this problem in an unsupervised way since no data was provided that classified any set of hybrids as tolerant or susceptible to any type of stress. First, we designed a deep convolutional neural network (CNN) that took advantage of state-of-the-art modeling and solution techniques to extract stress metrics for each type of stress. Our CNN model was found to successfully distinguish between the low and high stress environments due to considering multiple factors such as planting /harvest dates, daily weather, and soil conditions. Then, we conducted a linear regression of the yield of hybrid against each stress metric, and classified the hybrid based on the slope of the regression line, since the slope of the regression line showed how sensitive a hybrid was to a specific environmental stress. 
Our results suggested that only 14 \\% of the corn hybrids were tolerant to at least one type of stress.", "edit_actions": [{"type": "D", "before": ",", "after": null, "start_char_pos": 39, "end_char_pos": 40}, {"type": "R", "before": "corn hybrids", "after": "agriculture", "start_char_pos": 86, "end_char_pos": 98}, {"type": "R", "before": "corn hybrids", "after": "hybrid crops", "start_char_pos": 110, "end_char_pos": 122}, {"type": "R", "before": ", and heat", "after": "and heat stress", "start_char_pos": 153, "end_char_pos": 163}, {"type": "R", "before": "maize", "after": "corn", "start_char_pos": 391, "end_char_pos": 396}, {"type": "R", "before": "heat stress, drought", "after": "drought stress, heat", "start_char_pos": 543, "end_char_pos": 563}, {"type": "R", "before": "stress due to the combination of heat and drought", "after": "combined drought and heat stress", "start_char_pos": 576, "end_char_pos": 625}, {"type": "R", "before": "dataset", "after": "data", "start_char_pos": 743, "end_char_pos": 750}, {"type": "A", "before": null, "after": "type of", "start_char_pos": 833, "end_char_pos": 833}, {"type": "R", "before": "types", "after": "type", "start_char_pos": 1011, "end_char_pos": 1016}, {"type": "R", "before": "plant", "after": "planting", "start_char_pos": 1169, "end_char_pos": 1174}, {"type": "R", "before": "hybrids", "after": "hybrid", "start_char_pos": 1282, "end_char_pos": 1289}, {"type": "R", "before": "hybrids", "after": "hybrid", "start_char_pos": 1337, "end_char_pos": 1344}, {"type": "A", "before": null, "after": "of", "start_char_pos": 1364, "end_char_pos": 1364}, {"type": "A", "before": null, "after": "the corn", "start_char_pos": 1536, "end_char_pos": 1536}], "sents_char_pos": [0, 100, 266, 627, 841, 1027, 1226, 1494]} {"doc_id": "1906.01981", "revision_depth": "2", "before_revision": "We propose a non-robust interpretation of the distributionally robust optimization (DRO) problem by relating the impact of uncertainties around the distribution 
on the impact of constraining the objective through tail probabilities. Our interpretation allows utility maximizers to figure out the size of the ambiguity set through parameters that are directly linked to the chance parameters . We first show that for general \\phi-divergences, a DRO problem is asymptotically equivalent to a class of mean-deviation problems , where the ambiguity radius controls investor's risk preference . Based on this non-robust reformulation, we then show that when a boundedness constraint is added to the investment strategy. The DRO problem can be cast as a chance-constrained optimization (CCO) problem without distributional uncertainties . Without the boundedness constraint, the CCO problem is shown to perform uniformly better than the DRO problem, irrespective of the radius of the ambiguity set, the choice of the divergence measure, or the tail heaviness of the center distribution. Besides the widely-used Kullback-Leibler (KL) divergence which requires the distribution of the objective function to be exponentially bounded, our results apply to divergence measures that accommodate well heavy tail distribution such as the student t-distribution and the lognormal distribution . Comprehensive testings on synthetic data and real data are provided .", "after_revision": "This paper provides a non-robust interpretation of the distributionally robust optimization (DRO) problem by relating the distributional uncertainties to the chance probabilities. Our analysis allows a decision-maker to interpret the size of the ambiguity set , which is often lack of business meaning, through the chance parameters constraining the objective function . We first show that , for general \\phi-divergences, a DRO problem is asymptotically equivalent to a class of mean-deviation problems . 
These mean-deviation problems are not subject to uncertain distributions, and the ambiguity radius in the original DRO problem now plays the role of controlling the risk preference of the decision-maker. We then demonstrate that a DRO problem can be cast as a chance-constrained optimization (CCO) problem when a boundedness constraint is added to the decision variables . Without the boundedness constraint, the CCO problem is shown to perform uniformly better than the DRO problem, irrespective of the radius of the ambiguity set, the choice of the divergence measure, or the tail heaviness of the center distribution. Thanks to our high-order expansion result, a notable feature of our analysis is that it applies to divergence measures that accommodate well heavy tail distributions such as the student t-distribution and the lognormal distribution , besides the widely-used Kullback-Leibler (KL) divergence, which requires the distribution of the objective function to be exponentially bounded. Using the portfolio selection problem as an example, our comprehensive testings on multivariate heavy-tail datasets, both synthetic and real-world, shows that this business-interpretation approach is indeed useful and insightful .", "edit_actions": [{"type": "R", "before": "We propose", "after": "This paper provides", "start_char_pos": 0, "end_char_pos": 10}, {"type": "R", "before": "impact of uncertainties around the distribution on the impact of constraining the objective through tail", "after": "distributional uncertainties to the chance", "start_char_pos": 113, "end_char_pos": 217}, {"type": "R", "before": "interpretation allows utility maximizers to figure out", "after": "analysis allows a decision-maker to interpret", "start_char_pos": 237, "end_char_pos": 291}, {"type": "R", "before": "through parameters that are directly linked to", "after": ", which is often lack of business meaning, through", "start_char_pos": 322, "end_char_pos": 368}, {"type": "A", "before": null, "after": 
"constraining the objective function", "start_char_pos": 391, "end_char_pos": 391}, {"type": "A", "before": null, "after": ",", "start_char_pos": 413, "end_char_pos": 413}, {"type": "R", "before": ", where", "after": ". These mean-deviation problems are not subject to uncertain distributions, and", "start_char_pos": 525, "end_char_pos": 532}, {"type": "R", "before": "controls investor's risk preference . Based on this non-robust reformulation, we then show that when a boundedness constraint is added to the investment strategy. The", "after": "in the original DRO problem now plays the role of controlling the risk preference of the decision-maker. We then demonstrate that a", "start_char_pos": 554, "end_char_pos": 720}, {"type": "R", "before": "without distributional uncertainties", "after": "when a boundedness constraint is added to the decision variables", "start_char_pos": 796, "end_char_pos": 832}, {"type": "R", "before": "Besides the widely-used Kullback-Leibler (KL) divergence which requires the distribution of the objective function to be exponentially bounded,", "after": "Thanks to", "start_char_pos": 1083, "end_char_pos": 1226}, {"type": "R", "before": "results apply", "after": "high-order expansion result, a notable feature of our analysis is that it applies", "start_char_pos": 1231, "end_char_pos": 1244}, {"type": "R", "before": "distribution", "after": "distributions", "start_char_pos": 1301, "end_char_pos": 1313}, {"type": "R", "before": ". Comprehensive testings on synthetic data and real data are provided", "after": ", besides the widely-used Kullback-Leibler (KL) divergence, which requires the distribution of the objective function to be exponentially bounded. 
Using the portfolio selection problem as an example, our comprehensive testings on multivariate heavy-tail datasets, both synthetic and real-world, shows that this business-interpretation approach is indeed useful and insightful", "start_char_pos": 1380, "end_char_pos": 1449}], "sents_char_pos": [0, 232, 393, 716, 834, 1082]} {"doc_id": "1906.02086", "revision_depth": "2", "before_revision": "Microbial electrolysis cells (MECs) are devices that employ electroactive bacteria to perform extracellular electron transfer, enabling hydrogen generation from biodegradable substrates. In our previous work, we developed and analyzed a differential-algebraic equation (DAE) model for MECs. The model consists of ordinary differential equations ( ODE) resembling chemostats or continuous stirred tank reactors (CSTRs), an ODE for an extracellular mediator involved in electron transfer , and an algebraic constraint for electric current and hydrogen production. Our goal is to determine the outcome of competition between methanogenic archaea and electroactive bacteria, because only the latter contribute to both electric current and correlated hydrogen production. We investigate asymptotic stability in two industrially relevant versions of the model. An important aspect of many chemostat models is the principle of competitive exclusion , which states that only the URLanisms which can grow at the lowest substrate concentration will survive \\to . We show that if methanogens can grow at the lowest substrate concentration, then the equilibrium corresponding to competitive exclusion by methanogens is globally stable. The analogous result for electroactive bacteria is not necessarily true. We show that local asymptotic stability of competitive exclusion by electroactive bacteria is not guaranteed, even in a simplified model. In this case, even if electroactive bacteria can grow at the lowest substrate concentration, a few additional conditions are required to guarantee local stability. 
We also provide numerical simulations supporting these arguments. Our results suggest operating conditions that are most conducive to success of electroactive bacteria and current or hydrogen production in MECs. This will help identify when methane production or electricity and hydrogen production are favored.", "after_revision": "Microbial electrolysis cells (MECs) employ electroactive bacteria to perform extracellular electron transfer, enabling hydrogen generation from biodegradable substrates. In previous work, we developed and analyzed a differential-algebraic equation (DAE) model for MECs. The model resembles a chemostat with ordinary differential equations ( ODEs) for concentrations of substrate, organisms, and an extracellular mediator involved in electron transfer . There is also an algebraic constraint for electric current and hydrogen production. Our goal is to determine the outcome of competition between methanogenic archaea and electroactive bacteria, because only the latter contribute to electric current and resulting hydrogen production. We investigate asymptotic stability in two industrially relevant versions of the model. An important aspect of chemostats models is the principle of competitive exclusion -- only microbes which grow at the lowest substrate concentration will survive as t\to\infty . We show that if methanogens grow at the lowest substrate concentration, then the equilibrium corresponding to competitive exclusion by methanogens is globally asymptotically stable. The analogous result for electroactive bacteria is not necessarily true. We show that local asymptotic stability of exclusion by electroactive bacteria is not guaranteed, even in a simplified version of the model. In this case, even if electroactive bacteria can grow at the lowest substrate concentration, a few additional conditions are required to guarantee local asymptotic stability. We also provide numerical simulations supporting these arguments. 
Our results suggest operating conditions that are most conducive to success of electroactive bacteria and the resulting current and hydrogen production in MECs. This will help identify when methane production or electricity and hydrogen production are favored.", "edit_actions": [{"type": "D", "before": "are devices that", "after": null, "start_char_pos": 36, "end_char_pos": 52}, {"type": "D", "before": "our", "after": null, "start_char_pos": 190, "end_char_pos": 193}, {"type": "R", "before": "consists of", "after": "resembles a chemostat with", "start_char_pos": 301, "end_char_pos": 312}, {"type": "R", "before": "ODE) resembling chemostats or continuous stirred tank reactors (CSTRs), an ODE for an", "after": "ODEs) for concentrations of substrate, organisms, and an", "start_char_pos": 347, "end_char_pos": 432}, {"type": "R", "before": ", and", "after": ". There is also", "start_char_pos": 486, "end_char_pos": 491}, {"type": "D", "before": "both", "after": null, "start_char_pos": 709, "end_char_pos": 713}, {"type": "R", "before": "correlated", "after": "resulting", "start_char_pos": 735, "end_char_pos": 745}, {"type": "R", "before": "many chemostat", "after": "chemostats", "start_char_pos": 878, "end_char_pos": 892}, {"type": "R", "before": ", which states that only the organisms which can", "after": "-- only microbes which", "start_char_pos": 942, "end_char_pos": 990}, {"type": "A", "before": null, "after": "as t", "start_char_pos": 1047, "end_char_pos": 1047}, {"type": "A", "before": null, "after": "\infty", "start_char_pos": 1050, "end_char_pos": 1050}, {"type": "D", "before": "can", "after": null, "start_char_pos": 1081, "end_char_pos": 1084}, {"type": "A", "before": null, "after": "asymptotically", "start_char_pos": 1216, "end_char_pos": 1216}, {"type": "D", "before": "competitive", "after": null, "start_char_pos": 1341, "end_char_pos": 1352}, {"type": "A", "before": null, "after": "version of the", "start_char_pos": 1429, "end_char_pos": 1429}, {"type": "A", 
"before": null, "after": "asymptotic", "start_char_pos": 1590, "end_char_pos": 1590}, {"type": "R", "before": "current or", "after": "the resulting current and", "start_char_pos": 1774, "end_char_pos": 1784}], "sents_char_pos": [0, 186, 290, 561, 766, 854, 1052, 1224, 1297, 1436, 1601, 1667, 1813]} {"doc_id": "1906.05065", "revision_depth": "1", "before_revision": "We present an artificial neural network ( ANN ) approach to value financial derivatives . Atypically to standard ANN applications, practitioners equally use option pricing models to validate market prices and to infer unobserved prices. Importantly, models need to generate realistic arbitrage-free prices, meaning that no option portfolio can lead to risk-free profits. The absence of arbitrage opportunities is guaranteed by penalizing the loss using soft constraints on an extended grid of input values. ANNs can be pre-trained by first calibrating a standard option pricing model, and then training an ANN to a larger synthetic dataset generated from the calibrated model. The parameters transfer as well as the non-arbitrage constraints appear to be particularly useful when only sparse or erroneous data are available. We also explore how deeper ANNs improve over shallower ones, as well as other properties of the network architecture. We benchmark our method against standard option pricing models, such as Heston with and without jumps. We validate our method both on training sets, and testing sets, namely, highlighting both their capacity to reproduce observed prices and predict new ones.", "after_revision": "We present a neural network ( NN ) approach to fit and predict implied volatility surfaces (IVSs) . Atypically to standard NN applications, financial industry practitioners use such models equally to replicate market prices and to value other financial instruments. In other words, low training losses are as important as generalization capabilities. 
Importantly, IVS models need to generate realistic arbitrage-free option prices, meaning that no portfolio can lead to risk-free profits. We propose an approach guaranteeing the absence of arbitrage opportunities by penalizing the loss using soft constraints . Furthermore, our method can be combined with standard IVS models in quantitative finance, thus providing a NN-based correction when such models fail at replicating observed market prices. This lets practitioners use our approach as a plug-in on top of classical methods. Empirical results show that this approach is particularly useful when only sparse or erroneous data are available. We also quantify the uncertainty of the model predictions in regions with few or no observations. We further explore how deeper NNs improve over shallower ones, as well as other properties of the network architecture. We benchmark our method against standard IVS models. By evaluating our method on both training sets, and testing sets, namely, we highlight both their capacity to reproduce observed prices and predict new ones.", "edit_actions": [{"type": "R", "before": "an artificial", "after": "a", "start_char_pos": 11, "end_char_pos": 24}, {"type": "R", "before": "ANN", "after": "NN", "start_char_pos": 42, "end_char_pos": 45}, {"type": "R", "before": "value financial derivatives", "after": "fit and predict implied volatility surfaces (IVSs)", "start_char_pos": 60, "end_char_pos": 87}, {"type": "R", "before": "ANN applications, practitioners equally use option pricing models to validate", "after": "NN applications, financial industry practitioners use such models equally to replicate", "start_char_pos": 113, "end_char_pos": 190}, {"type": "R", "before": "infer unobserved prices. Importantly,", "after": "value other financial instruments. In other words, low training losses are as important as generalization capabilities. 
Importantly, IVS", "start_char_pos": 212, "end_char_pos": 249}, {"type": "A", "before": null, "after": "option", "start_char_pos": 299, "end_char_pos": 299}, {"type": "D", "before": "option", "after": null, "start_char_pos": 324, "end_char_pos": 330}, {"type": "R", "before": "The", "after": "We propose an approach guaranteeing the", "start_char_pos": 372, "end_char_pos": 375}, {"type": "D", "before": "is guaranteed", "after": null, "start_char_pos": 411, "end_char_pos": 424}, {"type": "R", "before": "on an extended grid of input values. ANNs can be pre-trained by first calibrating a standard option pricing model, and then training an ANN to a larger synthetic dataset generated from the calibrated model. The parameters transfer as well as the non-arbitrage constraints appear to be", "after": ". Furthermore, our method can be combined with standard IVS models in quantitative finance, thus providing a NN-based correction when such models fail at replicating observed market prices. This lets practitioners use our approach as a plug-in on top of classical methods. Empirical results show that this approach is", "start_char_pos": 471, "end_char_pos": 755}, {"type": "A", "before": null, "after": "quantify the uncertainty of the model predictions in regions with few or no observations. We further", "start_char_pos": 834, "end_char_pos": 834}, {"type": "R", "before": "ANNs", "after": "NNs", "start_char_pos": 854, "end_char_pos": 858}, {"type": "R", "before": "option pricing models, such as Heston with and without jumps. We validate our method both on", "after": "IVS models. 
By evaluating our method on both", "start_char_pos": 986, "end_char_pos": 1078}, {"type": "R", "before": "highlighting", "after": "we highlight", "start_char_pos": 1120, "end_char_pos": 1132}], "sents_char_pos": [0, 89, 236, 371, 507, 677, 825, 944, 1047]} {"doc_id": "1906.06424", "revision_depth": "1", "before_revision": "Simulation models of facial expressions suggest that posterior visual areas and brain areas underpinning sensorimotor simulations might interact to improve facial expression processing. According to these models, facial mimicry may contribute to the visual perceptual processing of facial expressions by influencing early stages of face processing, thus playing a crucial role in understanding the observed emotion . The aim of the present study was to assess whether and how early sensorimotor simulation influences face structural encoding/ processing. A secondary aim was to investigate whether there is a relationship between alexithymic traits and sensorimotor simulation as a mechanism for fine facial expression discrimination. In order to examine the time-course of face processing, we monitored P1 and N170 components of the event-related potentials (ERP) in participants performing a fine discrimination task of facial expressions . An animal discrimination task was implemented as a control condition. In half of the experiment, participants could freely use their facial mimicry whereas , in the other half, they had their facial mimicry blocked by a gel. Our results revealed that the P1 ERP component was not modulated by the mimicry manipulation while the N170 amplitude was larger in the blocked mimicry condition when compared to the free mimicry condition selectively for the face stimuli. Interestingly, this modulation interacted with the alexithymic traits, with a reduction of the N170 amplitude modulation as a function of the facial mimicry manipulationfor participants with the highest levels of alexithymic traits. 
These results demonstrate that sensorimotor simulation influences visual processing of facial expressions at early stages and that participants with higher alexithymic traits tend to underutilize the sensorimotor simulation system .", "after_revision": "Simulation models of facial expressions suggest that posterior visual areas and brain areas underpinning sensorimotor simulations might interact to improve facial expression processing. According to these models, facial mimicry may contribute to the visual processing of facial expressions by influencing early stages . The aim of the present study was to assess whether / how early sensorimotor simulation influences early stages of face processing. A secondary aim was to investigate the relationship between alexithymic traits and sensorimotor simulation . We monitored P1 and N170 components of the event-related potentials (ERP) in participants performing a fine discrimination task of facial expressions while implementing an animal discrimination task as control condition. In half of the experiment, participants could freely use their facial mimicry whereas in the other half, they had their facial mimicry blocked by a gel. Our results revealed that , on average, both P1 and N170 ERP components were not sensitive to mimicry manipulation. However, when taking into account alexithymic traits, a scenario corroborating sensorimotor simulation models emerged, with two dissociable temporal windows affected by mimicry manipulation as a function of alexithymia levels. Specifically, as a function of mimicry manipulation, individuals with lower alexithymic traits showed modulations of the P1 amplitude, while individuals with higher alexithymic traits showed modulations of the later N170). Furthermore, connectivity analysis at the scalp level suggested increased connectivity between sensorimotor and extrastriate visual regions in individuals with lower alexithymic traits compared to individuals with higher alexithymic traits. 
Overall, we interpreted these ERPs modulations as compensative visual processing under conditions of interference on the sensorimotor processing .", "edit_actions": [{"type": "D", "before": "perceptual", "after": null, "start_char_pos": 257, "end_char_pos": 267}, {"type": "D", "before": "of face processing, thus playing a crucial role in understanding the observed emotion", "after": null, "start_char_pos": 329, "end_char_pos": 414}, {"type": "R", "before": "and", "after": "/", "start_char_pos": 468, "end_char_pos": 471}, {"type": "R", "before": "face structural encoding/", "after": "early stages of face", "start_char_pos": 517, "end_char_pos": 542}, {"type": "R", "before": "whether there is a", "after": "the", "start_char_pos": 590, "end_char_pos": 608}, {"type": "R", "before": "as a mechanism for fine facial expression discrimination. In order to examine the time-course of face processing, we", "after": ". We", "start_char_pos": 677, "end_char_pos": 793}, {"type": "R", "before": ". An", "after": "while implementing an", "start_char_pos": 941, "end_char_pos": 945}, {"type": "R", "before": "was implemented as a", "after": "as", "start_char_pos": 973, "end_char_pos": 993}, {"type": "D", "before": ",", "after": null, "start_char_pos": 1099, "end_char_pos": 1100}, {"type": "R", "before": "the", "after": ", on average, both", "start_char_pos": 1194, "end_char_pos": 1197}, {"type": "R", "before": "ERP component was not modulated by the mimicry manipulation while the N170 amplitude was larger in the blocked mimicry condition when compared to the free mimicry condition selectively for the face stimuli. Interestingly, this modulation interacted with the", "after": "and N170 ERP components were not sensitive to mimicry manipulation. 
However, when taking into account", "start_char_pos": 1201, "end_char_pos": 1458}, {"type": "R", "before": "with a reduction of the N170 amplitude modulation", "after": "a scenario corroborating sensorimotor simulation models emerged, with two dissociable temporal windows affected by mimicry manipulation as a function of alexithymia levels. Specifically,", "start_char_pos": 1479, "end_char_pos": 1528}, {"type": "R", "before": "the facial mimicry manipulationfor participants with the highest levels of alexithymic traits. These results demonstrate that sensorimotor simulation influences visual processing of facial expressions at early stages and that participants with higher alexithymic traits tend to underutilize the sensorimotor simulation system", "after": "mimicry manipulation, individuals with lower alexithymic traits showed modulations of the P1 amplitude, while individuals with higher alexithymic traits showed modulations of the later N170). Furthermore, connectivity analysis at the scalp level suggested increased connectivity between sensorimotor and extrastriate visual regions in individuals with lower alexithymic traits compared to individuals with higher alexithymic traits. Overall, we interpreted these ERPs modulations as compensative visual processing under conditions of interference on the sensorimotor processing", "start_char_pos": 1546, "end_char_pos": 1871}], "sents_char_pos": [0, 185, 416, 554, 734, 942, 1012, 1167, 1407, 1640]} {"doc_id": "1906.07669", "revision_depth": "1", "before_revision": " A new variable stiffness actuator (VSA) is presented, designedfor reproducing the full versatility of human motions from delicate tasks to precise positioning tasks through teleoperation, and highly dynamic tasks like hammering in particular. Existing VSAs show good performance in terms of either quick stiffness changing time, high output velocity, or variable damping, but not in the combination required for highly dynamic tasks. 
Goal of the presented design is to reach with one system a stiffness changing time of 50 ms, a peak output velocity of 20 rad/s and variable damping , based on the requirements of literature and our previous research. A prototype was built and its performance measured with three motors in parallel configuration: two responsible for changing the VSAs neutral position and effective stiffness through a lever arm mechanism, and the third acting as variable damper. Its effective stiffness can be changed continuously within 50 ms to 120 ms for small to large stiffness steps, its output velocity is up to 1100 rad/s and its oscillation behavior can be controlled precisely with the variable damper . Its effective stiffness range is 0.2 Nm/rad to 313 Nm/rad and its maximum continuous torque 9.4 Nm. This unique combination makes the new actuator particularly suitable for highly dynamic tasks , while being also very versatile, and makes it especially interesting for teleoperation and human-robot collaboration.", "after_revision": "This study is part of research aiming at increasing the range of dynamic tasks for teleoperated field robotics in order to allow operators to use the full range of human motions without being limited by the dynamics of the robotic manipulator. A new variable impedance actuator (VIA) was designed, capable of reproducing motions through teleoperation from precise positioning tasks to highly dynamic tasks . The design requirements based on previous human user studies were a stiffness changing time of 50 ms, a peak output velocity of 20 rad/s and variable damping allowing to suppress undesired oscillations. This is a unique combination of features that was not met by other VIAs. The new design has three motors in parallel configuration: two responsible for changing the VIA's neutral position and effective stiffness through a sliding pivot point lever mechanism, and the third acting as variable damper. 
A prototype was built and its performance measured with an effective stiffness changing time of 50 to 120 ms for small to large stiffness steps, nominal output velocity of 16 rad/s and a variable damper with a damping torque from 0 to 3 Nm . Its effective stiffness range is 0.2 to 313 Nm/rad . This concludes that the new actuator is particularly suitable for highly dynamic tasks . At the same time, the new actuator is also very versatile, making it especially interesting for teleoperation and human-robot collaboration.", "edit_actions": [{"type": "A", "before": null, "after": "This study is part of research aiming at increasing the range of dynamic tasks for teleoperated field robotics in order to allow operators to use the full range of human motions without being limited by the dynamics of the robotic manipulator.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "stiffness actuator (VSA) is presented, designedfor reproducing the full versatility of human motions from delicate tasks to", "after": "impedance actuator (VIA) was designed, capable of reproducing motions through teleoperation from", "start_char_pos": 16, "end_char_pos": 139}, {"type": "R", "before": "through teleoperation, and", "after": "to", "start_char_pos": 166, "end_char_pos": 192}, {"type": "R", "before": "like hammering in particular. Existing VSAs show good performance in terms of either quick stiffness changing time, high output velocity, or variable damping, but not in the combination required for highly dynamic tasks. Goal of the presented design is to reach with one system", "after": ". The design requirements based on previous human user studies were", "start_char_pos": 214, "end_char_pos": 491}, {"type": "R", "before": ", based on the requirements of literature and our previous research. A prototype was built and its performance measured with", "after": "allowing to suppress undesired oscillations. This is a unique combination of features that was not met by other VIAs. 
The new design has", "start_char_pos": 584, "end_char_pos": 708}, {"type": "R", "before": "VSAs", "after": "VIA's", "start_char_pos": 782, "end_char_pos": 786}, {"type": "R", "before": "lever arm", "after": "sliding pivot point lever", "start_char_pos": 838, "end_char_pos": 847}, {"type": "R", "before": "Its effective stiffness can be changed continuously within", "after": "A prototype was built and its performance measured with an effective stiffness changing time of", "start_char_pos": 900, "end_char_pos": 958}, {"type": "D", "before": "ms", "after": null, "start_char_pos": 962, "end_char_pos": 964}, {"type": "R", "before": "its output velocity is up to 1100", "after": "nominal output velocity of 16", "start_char_pos": 1011, "end_char_pos": 1044}, {"type": "R", "before": "its oscillation behavior can be controlled precisely with the variable damper", "after": "a variable damper with a damping torque from 0 to 3 Nm", "start_char_pos": 1055, "end_char_pos": 1132}, {"type": "D", "before": "Nm/rad", "after": null, "start_char_pos": 1172, "end_char_pos": 1178}, {"type": "R", "before": "and its maximum continuous torque 9.4 Nm. This unique combination makes", "after": ". This concludes that", "start_char_pos": 1193, "end_char_pos": 1264}, {"type": "A", "before": null, "after": "is", "start_char_pos": 1282, "end_char_pos": 1282}, {"type": "R", "before": ", while being", "after": ". At the same time, the new actuator is", "start_char_pos": 1330, "end_char_pos": 1343}, {"type": "R", "before": "and makes", "after": "making", "start_char_pos": 1365, "end_char_pos": 1374}], "sents_char_pos": [0, 243, 434, 652, 899, 1134]} {"doc_id": "1906.08256", "revision_depth": "1", "before_revision": "The contributions of this paper are two-fold. We define a new filtration called the cover filtration built from a single cover based on a generalized Jaccard distance. 
We provide stability results for the cover filtration and show how the construction is equivalent to the Cech filtration under certain settings . We then develop a language and theory for stable paths within this filtration , inspired by ideas of persistent homology . We demonstrate how the filtration and paths can be applied to a variety of applications in which defining a metric is not obvious but a cover is readily available. We demonstrate the usefulness of this construction by employing it in the context of recommendation systems and explainable machine learning. We demonstrate a new perspective for modeling recommendation system data sets that does not require manufacturing a bespoke metric. This extends work on graph-based recommendation systems , allowing a topological perspective . For an explicit example, we look at a movies data set and we find the stable paths identified in our framework represent a sequence of movies constituting a gentle transition and ordering from one genre to another. For explainable machine learning, we apply the Mapper for model induction, providing explanations in the form of paths between subpopulations or observations. Our framework provides an alternative way of building a filtration from a single mapper that is then used to explore stable paths . As a direct illustration, we build a mapper from a supervised machine learning model trained on the FashionMNIST data set . We show that the stable paths in the cover filtration provide improved explanations of relationships between subpopulations of images.", "after_revision": "Two central concepts from topological data analysis are persistence and the Mapper construction. Persistence employs a sequence of objects built on data called a filtration. A Mapper produces insightful summaries of data, and has found widespread applications in diverse areas. 
We define a new filtration called the cover filtration built from a single cover based on a generalized Steinhaus distance, which is a generalization of Jaccard distance. We prove a stability result: the cover filtrations of two covers are \\alpha/m interleaved, where \\alpha is a bound on bottleneck distance between covers and m is the size of smallest set in either cover. We also show our construction is equivalent to the Cech filtration under certain settings , and the Vietoris-Rips filtration completely determines the cover filtration in all cases . We then develop a theory for stable paths within this filtration . Unlike standard results on stability in topological persistence, our definition of path stability aligns exactly with the above result on stability of cover filtration . We demonstrate how our framework can be employed in a variety of applications where a metric is not obvious but a cover is readily available. First we present a new model for recommendation systems using cover filtration . For an explicit example, stable paths identified on a movies data set represent sequences of movies constituting gentle transitions from one genre to another. As a second application in explainable machine learning, we apply the Mapper for model induction, providing explanations in the form of paths between subpopulations . Stable paths in the Mapper from a supervised machine learning model trained on the FashionMNIST data set provide improved explanations of relationships between subpopulations of images.", "edit_actions": [{"type": "R", "before": "The contributions of this paper are two-fold.", "after": "Two central concepts from topological data analysis are persistence and the Mapper construction. Persistence employs a sequence of objects built on data called a filtration. 
A Mapper produces insightful summaries of data, and has found widespread applications in diverse areas.", "start_char_pos": 0, "end_char_pos": 45}, {"type": "A", "before": null, "after": "Steinhaus distance, which is a generalization of", "start_char_pos": 150, "end_char_pos": 150}, {"type": "R", "before": "provide stability results for the cover filtration and show how the", "after": "prove a stability result: the cover filtrations of two covers are \\alpha/m interleaved, where \\alpha is a bound on bottleneck distance between covers and m is the size of smallest set in either cover. We also show our", "start_char_pos": 172, "end_char_pos": 239}, {"type": "A", "before": null, "after": ", and the Vietoris-Rips filtration completely determines the cover filtration in all cases", "start_char_pos": 313, "end_char_pos": 313}, {"type": "D", "before": "language and", "after": null, "start_char_pos": 334, "end_char_pos": 346}, {"type": "R", "before": ", inspired by ideas of persistent homology", "after": ". Unlike standard results on stability in topological persistence, our definition of path stability aligns exactly with the above result on stability of cover filtration", "start_char_pos": 394, "end_char_pos": 436}, {"type": "R", "before": "the filtration and paths can be applied to", "after": "our framework can be employed in", "start_char_pos": 458, "end_char_pos": 500}, {"type": "R", "before": "in which defining", "after": "where", "start_char_pos": 527, "end_char_pos": 544}, {"type": "R", "before": "We demonstrate the usefulness of this construction by employing it in the context of recommendation systems and explainable machine learning. We demonstrate a new perspective for modeling recommendation system data sets that does not require manufacturing a bespoke metric. 
This extends work on graph-based recommendation systems , allowing a topological perspective", "after": "First we present a new model for recommendation systems using cover filtration", "start_char_pos": 603, "end_char_pos": 969}, {"type": "R", "before": "we look at", "after": "stable paths identified on", "start_char_pos": 997, "end_char_pos": 1007}, {"type": "R", "before": "and we find the stable paths identified in our framework represent a sequence", "after": "represent sequences", "start_char_pos": 1026, "end_char_pos": 1103}, {"type": "R", "before": "a gentle transition and ordering", "after": "gentle transitions", "start_char_pos": 1127, "end_char_pos": 1159}, {"type": "R", "before": "For", "after": "As a second application in", "start_char_pos": 1187, "end_char_pos": 1190}, {"type": "R", "before": "or observations. Our framework provides an alternative way of building a filtration from a single mapper that is then used to explore stable paths . As a direct illustration, we build a mapper", "after": ". Stable paths in the Mapper", "start_char_pos": 1329, "end_char_pos": 1521}, {"type": "D", "before": ". We show that the stable paths in the cover filtration", "after": null, "start_char_pos": 1600, "end_char_pos": 1655}], "sents_char_pos": [0, 45, 168, 315, 438, 602, 744, 876, 971, 1186, 1345, 1477, 1601]} {"doc_id": "1906.09528", "revision_depth": "1", "before_revision": "Motivational salience is a mechanism that determines organism's current level of attraction to or repulsion from a particular object, event, or outcome. Motivational salience is described by modulating the reward by an externally controlled parameter that remains constant within a single behavioral episode. The vector of perceived values of various outcomes determines motivation of organism toward different goals. Organism's behavior should be able to adapt to the varying-in-time motivation vector. 
Here, we propose a reinforcement learning framework that relies on neural networks to learn optimal behavior for different dynamically changing motivation vectors. First, we show that Q-learning neural networks can learn to navigate towards variable goals whose relative salience is determined by a multidimensional motivational vector . Second, we show that a Q-learning network with motivation can learn complex behaviors towards several goals distributed in an environment. Finally, we show that firing patterns displayed by neurons in the ventral pallidum , a basal ganglia structure playing a crucial role in motivated behaviors , are similar to the responses of neuronsin recurrent neural networks trained in similar conditions. Similarly to the pallidum neurons, artificial neural nets contain two different classes of neurons, tuned to reward and punishment. We conclude that reinforcement learning networks can efficiently learn optimal behavior in conditions when reward values are modulated by external motivational processes with arbitrary dynamics. Motivational salience can be viewed as a general-purpose model-free method identifying and capturing changes in subjective or objective values of multiple rewards. Networks with motivationmay also be parts of a larger hierarchical reinforcement learning system in the brain.", "after_revision": "How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in environment with dynamic rewards . Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. 
Finally, we show that in Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP) , a basal ganglia structure involved in motivated behaviors . We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.", "edit_actions": [{"type": "R", "before": "Motivational salience is a mechanism that determines URLanism's current level of attraction to or repulsion from a particular object, event, or outcome. Motivational salience is described by modulating the reward by an externally controlled parameter that remains constant within a single behavioral episode. The vector of perceived values of various outcomes determines motivation of URLanism toward different goals. 
Organism's behavior should be able to adapt to the varying-in-time motivation vector.", "after": "How can animals behave effectively in conditions involving different motivational contexts?", "start_char_pos": 0, "end_char_pos": 503}, {"type": "R", "before": "a reinforcement learning framework that relies on neural networks to", "after": "how reinforcement learning neural networks can", "start_char_pos": 521, "end_char_pos": 589}, {"type": "R", "before": "different dynamically changing motivation", "after": "dynamically changing motivational salience", "start_char_pos": 617, "end_char_pos": 658}, {"type": "R", "before": "can learn to navigate towards variable goals whose relative salience is determined by a multidimensional motivational vector", "after": "with motivation can navigate in environment with dynamic rewards", "start_char_pos": 715, "end_char_pos": 839}, {"type": "R", "before": "a Q-learning network with motivation", "after": "such networks", "start_char_pos": 863, "end_char_pos": 899}, {"type": "A", "before": null, "after": "simultaneously directed", "start_char_pos": 928, "end_char_pos": 928}, {"type": "R", "before": "firing patterns displayed by", "after": "in Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of", "start_char_pos": 1004, "end_char_pos": 1032}, {"type": "A", "before": null, "after": "(VP)", "start_char_pos": 1065, "end_char_pos": 1065}, {"type": "R", "before": "playing a crucial role", "after": "involved", "start_char_pos": 1094, "end_char_pos": 1116}, {"type": "R", "before": ", are similar to the responses of neuronsin recurrent neural networks trained in similar conditions. Similarly to the pallidum neurons, artificial neural nets contain two different", "after": ". 
We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned", "start_char_pos": 1140, "end_char_pos": 1320}, {"type": "R", "before": "tuned to reward and punishment.", "after": "responding to positive and negative rewards. Our model generates predictions for the VP connectivity.", "start_char_pos": 1341, "end_char_pos": 1372}, {"type": "R", "before": "reinforcement learning networks can efficiently learn optimal behavior in conditions when reward values are modulated by external motivational processes with arbitrary dynamics. Motivational salience can be viewed as a general-purpose model-free method identifying and capturing changes in subjective or objective values of multiple rewards. Networks with motivationmay also be parts of a larger", "after": "networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how", "start_char_pos": 1390, "end_char_pos": 1785}, {"type": "R", "before": "system", "after": "is implemented", "start_char_pos": 1822, "end_char_pos": 1828}], "sents_char_pos": [0, 152, 308, 417, 503, 667, 841, 981, 1240, 1372, 1567, 1731]} {"doc_id": "1906.10121", "revision_depth": "1", "before_revision": "The prediction of stock prices is an important task in economics, investment and financial decision-making. It hasfor several decades, spurred the interest of many researchers to design stock price predictive models . In this paper , the URLanisms search algorithm, a new metaheuristic algorithm is employed as an efficient method for training feedforward neural networks (FFNN). The training process is used to build a better stock price predictive model. The Straits Times Index, Nikkei 225, NASDAQ Composite, S P 500, and Dow Jones Industrial Average indices were utilized as time series data sets for training and testing proposed predic-tive model. 
Three evaluation methods namely, Root Mean Squared Error, Mean Absolute Percentage Error and Mean Absolution Deviation are used to compare the results of the implemented model . The computational results obtained revealed that the hybrid Symbiotic Organisms Search Algorithm exhibited outstanding predictive performance when compared to the hybrid Particle Swarm Optimization, Genetic Algorithm, and ARIMA based models. The new model is a promising predictive technique for solving high dimensional nonlinear time series data that are difficult to capture by traditional models.", "after_revision": "The prediction of stock prices is an important task in economics, investment and making financial decisions. This has, for decades, spurred the interest of many researchers to make focused contributions to the design of accurate stock price predictive models ; of which some have been utilized to predict the next day opening and closing prices of the stock indices. This paper proposes the design and implementation of a hybrid URLanisms search trained feedforward neural network model for effective and accurate stock price prediction. The URLanisms search algorithm is used as an efficient optimization technique to train the feedforward neural networks , while the resulting training process is used to build a better stock price prediction model. Furthermore, the study also presents a comparative performance evaluation of three different stock price forecasting models; namely, the particle swarm optimization trained feedforward neural network model, the genetic algorithm trained feedforward neural network model and the well-known ARIMA model. The system developed in support of this study utilizes sixteen stock indices as time series datasets for training and testing purpose. Three statistical evaluation measures are used to compare the results of the implemented models, namely the root mean squared error, the mean absolute percentage error and the mean absolution deviation . 
The computational results obtained reveal that the URLanisms search trained feedforward neural network model exhibits outstanding predictive performance compared to the other models. However, the performance study shows that the three metaheuristics trained feedforward neural network models have promising predictive competence for solving problems of high dimensional nonlinear time series data , which are difficult to capture by traditional models.", "edit_actions": [{"type": "R", "before": "financial decision-making. It hasfor several", "after": "making financial decisions. This has, for", "start_char_pos": 81, "end_char_pos": 125}, {"type": "R", "before": "design", "after": "make focused contributions to the design of accurate", "start_char_pos": 179, "end_char_pos": 185}, {"type": "R", "before": ". In this paper , the URLanisms search algorithm, a new metaheuristic algorithm is employed", "after": "; of which some have been utilized to predict the next day opening and closing prices of the stock indices. This paper proposes the design and implementation of a hybrid URLanisms search trained feedforward neural network model for effective and accurate stock price prediction. The URLanisms search algorithm is used", "start_char_pos": 216, "end_char_pos": 307}, {"type": "R", "before": "method for training", "after": "optimization technique to train the", "start_char_pos": 324, "end_char_pos": 343}, {"type": "R", "before": "(FFNN). The", "after": ", while the resulting", "start_char_pos": 372, "end_char_pos": 383}, {"type": "D", "before": "predictive model. The Straits Times Index, Nikkei 225, NASDAQ Composite, S", "after": null, "start_char_pos": 439, "end_char_pos": 513}, {"type": "R", "before": "P 500, and Dow Jones Industrial Average indices were utilized", "after": "prediction model. 
Furthermore, the study also presents a comparative performance evaluation of three different stock price forecasting models; namely, the particle swarm optimization trained feedforward neural network model, the genetic algorithm trained feedforward neural network model and the well-known ARIMA model. The system developed in support of this study utilizes sixteen stock indices", "start_char_pos": 514, "end_char_pos": 575}, {"type": "R", "before": "data sets", "after": "datasets", "start_char_pos": 591, "end_char_pos": 600}, {"type": "R", "before": "proposed predic-tive model. Three evaluation methods namely, Root Mean Squared Error, Mean Absolute Percentage Error and Mean Absolution Deviation", "after": "purpose. Three statistical evaluation measures", "start_char_pos": 626, "end_char_pos": 772}, {"type": "R", "before": "model", "after": "models, namely the root mean squared error, the mean absolute percentage error and the mean absolution deviation", "start_char_pos": 824, "end_char_pos": 829}, {"type": "R", "before": "revealed that the hybrid Symbiotic Organisms Search Algorithm exhibited", "after": "reveal that the URLanisms search trained feedforward neural network model exhibits", "start_char_pos": 867, "end_char_pos": 938}, {"type": "D", "before": "when", "after": null, "start_char_pos": 974, "end_char_pos": 978}, {"type": "R", "before": "hybrid Particle Swarm Optimization, Genetic Algorithm, and ARIMA based models. The new model is a promising predictive technique for solving", "after": "other models. 
However, the performance study shows that the three metaheuristics trained feedforward neural network models have promising predictive competence for solving problems of", "start_char_pos": 995, "end_char_pos": 1135}, {"type": "R", "before": "that", "after": ", which", "start_char_pos": 1180, "end_char_pos": 1184}], "sents_char_pos": [0, 107, 217, 379, 456, 653, 831, 1073]} {"doc_id": "1906.10893", "revision_depth": "1", "before_revision": "Internet-of-Things (IoT) companies strive to get feedback from users to improve their products and services . However, traditional surveys cannot reflect the actual conditions of customers' due to the limited questions. Besides, survey results are affected by various subjective factors. In contrast, the recorded usages of IoT devices reflect customers' behaviours more comprehensively and accurately. We design an intelligent system to help IoT device manufacturers to take advantage of customers' data and build a machine learning model to predict customers' requirements and possible consumption behaviours with federated learning (FL) technology. The FL consists of two stages: in the first stage, customers train the initial model using the phone and the edge computing server collaboratively. The mobile edge computing server's high computation power can assist customers' training locally. Customers first collect data from various IoT devices using phones, and then download and train the initial model with their data. During the training, customers first extract features using their mobiles, and then add the Laplacian noise to the extracted features based on differential privacy, a formal and popular notion to quantify privacy. After achieving the local model , customers sign on their models respectively and send them to the blockchain. We use the blockchain to replace the centralized aggregator which belongs to the third party in FL. 
In the second stage, miners calculate the averaged model using the collected models sent from customers. By the end of the crowdsourcing job , one of the miners, who is selected as the temporary leader, uploads the model to the blockchain. Besides, to attract more customers to participate in the crowdsourcing FL , we design an incentive mechanism , which awards participants with coins that can be used to purchase other services provided by the company .", "after_revision": "Home appliance manufacturers strive to obtain feedback from users to improve their products and services to build a smart home system. To help manufacturers develop a smart home system, we design a federated learning (FL) system leveraging the reputation mechanism to assist home appliance manufacturers to train a machine learning model based on customers' data. Then, manufacturers can predict customers' requirements and consumption behaviors in the future. The working flow of the system includes two stages: in the first stage, customers train the initial model provided by the manufacturer using both the mobile phone and the mobile edge computing (MEC) server. Customers collect data from various home appliances using phones, and then they download and train the initial model with their local data. After deriving local models , customers sign on their models and send them to the blockchain. In case customers or manufacturers are malicious, we use the blockchain to replace the centralized aggregator in the traditional FL system. Since records on the blockchain are untampered, malicious customers or manufacturers' activities are traceable. In the second stage, manufacturers select customers URLanizations as miners for calculating the averaged model using received models from customers. By the end of the crowdsourcing task , one of the miners, who is selected as the temporary leader, uploads the model to the blockchain. 
To protect customers' privacy and improve the test accuracy, we enforce differential privacy on the extracted features and propose a new normalization technique. We experimentally demonstrate that our normalization technique outperforms batch normalization when features are under differential privacy protection. In addition, to attract more customers to participate in the crowdsourcing FL task , we design an incentive mechanism to award participants .", "edit_actions": [{"type": "R", "before": "Internet-of-Things (IoT) companies strive to get", "after": "Home appliance manufacturers strive to obtain", "start_char_pos": 0, "end_char_pos": 48}, {"type": "D", "before": ". However, traditional surveys cannot reflect the actual conditions of customers' due to the limited questions. Besides, survey results are affected by various subjective factors. In contrast, the recorded usages of IoT devices reflect customers' behaviours more comprehensively and accurately. We design an intelligent system", "after": null, "start_char_pos": 108, "end_char_pos": 434}, {"type": "R", "before": "help IoT device manufacturers to take advantage of customers' data and build", "after": "build a smart home system. To help manufacturers develop a smart home system, we design a federated learning (FL) system leveraging the reputation mechanism to assist home appliance manufacturers to train", "start_char_pos": 438, "end_char_pos": 514}, {"type": "R", "before": "to", "after": "based on customers' data. Then, manufacturers can", "start_char_pos": 540, "end_char_pos": 542}, {"type": "R", "before": "possible consumption behaviours with federated learning (FL) technology. The FL consists of", "after": "consumption behaviors in the future. 
The working flow of the system includes", "start_char_pos": 579, "end_char_pos": 670}, {"type": "R", "before": "using the", "after": "provided by the manufacturer using both the mobile", "start_char_pos": 737, "end_char_pos": 746}, {"type": "D", "before": "edge computing server collaboratively. The", "after": null, "start_char_pos": 761, "end_char_pos": 803}, {"type": "R", "before": "server's high computation power can assist customers' training locally. Customers first", "after": "(MEC) server. Customers", "start_char_pos": 826, "end_char_pos": 913}, {"type": "R", "before": "IoT devices", "after": "home appliances", "start_char_pos": 940, "end_char_pos": 951}, {"type": "A", "before": null, "after": "they", "start_char_pos": 975, "end_char_pos": 975}, {"type": "R", "before": "data. During the training, customers first extract features using their mobiles, and then add the Laplacian noise to the extracted features based on differential privacy, a formal and popular notion to quantify privacy. After achieving the local model", "after": "local data. After deriving local models", "start_char_pos": 1024, "end_char_pos": 1275}, {"type": "D", "before": "respectively", "after": null, "start_char_pos": 1309, "end_char_pos": 1321}, {"type": "R", "before": "We", "after": "In case customers or manufacturers are malicious, we", "start_char_pos": 1355, "end_char_pos": 1357}, {"type": "R", "before": "which belongs to the third party in FL.", "after": "in the traditional FL system. 
Since records on the blockchain are untampered, malicious customers or manufacturers' activities are traceable.", "start_char_pos": 1415, "end_char_pos": 1454}, {"type": "R", "before": "miners calculate", "after": "manufacturers select customers URLanizations as miners for calculating", "start_char_pos": 1476, "end_char_pos": 1492}, {"type": "R", "before": "the collected models sent", "after": "received models", "start_char_pos": 1518, "end_char_pos": 1543}, {"type": "R", "before": "job", "after": "task", "start_char_pos": 1592, "end_char_pos": 1595}, {"type": "R", "before": "Besides,", "after": "To protect customers' privacy and improve the test accuracy, we enforce differential privacy on the extracted features and propose a new normalization technique. We experimentally demonstrate that our normalization technique outperforms batch normalization when features are under differential privacy protection. In addition,", "start_char_pos": 1695, "end_char_pos": 1703}, {"type": "A", "before": null, "after": "task", "start_char_pos": 1769, "end_char_pos": 1769}, {"type": "R", "before": ", which awards participants with coins that can be used to purchase other services provided by the company", "after": "to award participants", "start_char_pos": 1805, "end_char_pos": 1911}], "sents_char_pos": [0, 109, 219, 287, 402, 651, 799, 897, 1029, 1243, 1354, 1454, 1559, 1694]} {"doc_id": "1906.11573", "revision_depth": "1", "before_revision": "The contribution of structural connectivity to dynamic and often highly variable brain states remains poorly understood. We present a mathematical and computational study suited to assess the structure--function issue . We treat a system of Jansen--Rit neural-mass nodes with heterogeneous structural connections estimated from diffusion MRI data provided by the Human Connectome Project. 
Direct simulations are used to determine the similarity of functional (inferred from correlated activity between brain areas ) and structural connectivity matrices as a function of the parameters controlling dynamics of a single node , highlighting a non-trivial structure--function relationship in regimes that support limit cycle oscillations. To determine their relationship, we firstly determine the network instabilities that give rise to oscillations, and the set of false bifurcationsthat occur beyond this onset. In particular, we highlight that functional connectivity (FC) is inherited most robustly from structure when node dynamics are poised near a Hopf bifurcation, whilst near false bifurcations, structure only weakly influences function . Secondly, we develop a weakly coupled oscillator description to analyse oscillatory phase-locked states and, furthermore, show how the modular structure of the FC matrix can be predicted using a linear stability analysis. This study thereby emphasises that local dynamics can have a substantial role in shaping large-scale functional brain states.", "after_revision": "The contribution of structural connectivity to functional brain states remains poorly understood. We present a mathematical and computational study suited to assess the structure--function issue , treating a system of Jansen--Rit neural-mass nodes with heterogeneous structural connections estimated from diffusion MRI data provided by the Human Connectome Project. Via direct simulations we determine the similarity of functional (inferred from correlated activity between nodes ) and structural connectivity matrices under variation of the parameters controlling single-node dynamics , highlighting a non-trivial structure--function relationship in regimes that support limit cycle oscillations. 
To determine their relationship, we firstly calculate network instabilities giving rise to oscillations, and the so-called `false bifurcations' (for which a significant qualitative change in the orbit is observed, without a change of stability) occurring beyond this onset. We highlight that functional connectivity (FC) is inherited robustly from structure when node dynamics are poised near a Hopf bifurcation, whilst near false bifurcations, structure only weakly influences FC . Secondly, we develop a weakly-coupled oscillator description to analyse oscillatory phase-locked states and, furthermore, show how the modular structure of FC matrices can be predicted via linear stability analysis. This study thereby emphasises the substantial role that local dynamics can have in shaping large-scale functional brain states.", "edit_actions": [{"type": "R", "before": "dynamic and often highly variable", "after": "functional", "start_char_pos": 47, "end_char_pos": 80}, {"type": "R", "before": ". We treat", "after": ", treating", "start_char_pos": 218, "end_char_pos": 228}, {"type": "R", "before": "Direct simulations are used to", "after": "Via direct simulations we", "start_char_pos": 389, "end_char_pos": 419}, {"type": "R", "before": "brain areas", "after": "nodes", "start_char_pos": 502, "end_char_pos": 513}, {"type": "R", "before": "as a function", "after": "under variation", "start_char_pos": 553, "end_char_pos": 566}, {"type": "R", "before": "dynamics of a single node", "after": "single-node dynamics", "start_char_pos": 597, "end_char_pos": 622}, {"type": "R", "before": "determine the network instabilities that give", "after": "calculate network instabilities giving", "start_char_pos": 779, "end_char_pos": 824}, {"type": "R", "before": "set of false bifurcationsthat occur", "after": "so-called `false bifurcations' (for which a significant qualitative change in the orbit is observed, without a change of stability) occurring", "start_char_pos": 855, "end_char_pos": 890}, 
{"type": "R", "before": "In particular, we", "after": "We", "start_char_pos": 910, "end_char_pos": 927}, {"type": "D", "before": "most", "after": null, "start_char_pos": 985, "end_char_pos": 989}, {"type": "R", "before": "function", "after": "FC", "start_char_pos": 1134, "end_char_pos": 1142}, {"type": "R", "before": "weakly coupled", "after": "weakly-coupled", "start_char_pos": 1168, "end_char_pos": 1182}, {"type": "R", "before": "the FC matrix", "after": "FC matrices", "start_char_pos": 1301, "end_char_pos": 1314}, {"type": "R", "before": "using a", "after": "via", "start_char_pos": 1332, "end_char_pos": 1339}, {"type": "A", "before": null, "after": "the substantial role", "start_char_pos": 1397, "end_char_pos": 1397}, {"type": "D", "before": "a substantial role", "after": null, "start_char_pos": 1427, "end_char_pos": 1445}], "sents_char_pos": [0, 120, 219, 388, 734, 909, 1144, 1366]} {"doc_id": "1907.07835", "revision_depth": "2", "before_revision": "EEG signals measure the neuronal activities on different brain regions via electrodes. Many existing studies on EEG-based emotion recognition do not exploit the topological structure of EEG signals . In this paper, we propose a regularized graph neural network (RGNN) for EEG-based emotion recognition , which is biologically supported and captures both local and global inter-channel relations . Specifically, we model the inter-channel relations in EEG signals via an adjacency matrix in our graph neural network where the connection and sparseness of the adjacency matrix are supported by the neurosicience theories of human URLanization. In addition, we propose two regularizers, namely node-wise domain adversarial training (NodeDAT) and emotion-aware distribution learning (EmotionDL), to improve the robustness of our model against cross-subject EEG variations and noisy labels, respectively. 
To thoroughly evaluate our model, we conduct extensive experiments in both subject-dependent and subject-independent classification settings on two public datasets : SEED and SEED-IV . Our model obtains better performance than competitive baselines such as SVM, DBN, DGCNN, BiDANN, and the state-of-the-art BiHDM in most experimental settings. Our model analysis demonstrates that the proposed biologically supported adjacency matrix and two regularizers contribute consistent and significant gain to the performance . Investigations on the neuronal activities reveal that pre-frontal, parietal and occipital regions may be the most informative regions for emotion recognition, which is consistent with relevant prior studies. In addition, experimental results suggest that global inter-channel relations between the left and right hemispheres are important for emotion recognition and local inter-channel relations between (FP1, AF3), (F6, F8) and (FP2, AF4) may also provide useful information .", "after_revision": "Electroencephalography (EEG) measures the neuronal activities in different brain regions via electrodes. Many existing studies on EEG-based emotion recognition do not fully exploit the topology of EEG channels . In this paper, we propose a regularized graph neural network (RGNN) for EEG-based emotion recognition . RGNN considers the biological topology among different brain regions to capture both local and global relations among different EEG channels . Specifically, we model the inter-channel relations in EEG signals via an adjacency matrix in a graph neural network where the connection and sparseness of the adjacency matrix are inspired by neuroscience theories of human URLanization. In addition, we propose two regularizers, namely node-wise domain adversarial training (NodeDAT) and emotion-aware distribution learning (EmotionDL), to better handle cross-subject EEG variations and noisy labels, respectively. 
Extensive experiments on two public datasets , SEED and SEED-IV , demonstrate the superior performance of our model than state-of-the-art models in most experimental settings. Moreover, ablation studies show that the proposed adjacency matrix and two regularizers contribute consistent and significant gain to the performance of our RGNN model. Finally, investigations on the neuronal activities reveal important brain regions and inter-channel relations for EEG-based emotion recognition .", "edit_actions": [{"type": "R", "before": "EEG signals measure", "after": "Electroencephalography (EEG) measures", "start_char_pos": 0, "end_char_pos": 19}, {"type": "R", "before": "on", "after": "in", "start_char_pos": 44, "end_char_pos": 46}, {"type": "R", "before": "exploit the topological structure of EEG signals", "after": "fully exploit the topology of EEG channels", "start_char_pos": 149, "end_char_pos": 197}, {"type": "R", "before": ", which is biologically supported and captures", "after": ". RGNN considers the biological topology among different brain regions to capture", "start_char_pos": 302, "end_char_pos": 348}, {"type": "R", "before": "inter-channel relations", "after": "relations among different EEG channels", "start_char_pos": 371, "end_char_pos": 394}, {"type": "R", "before": "our", "after": "a", "start_char_pos": 490, "end_char_pos": 493}, {"type": "R", "before": "supported by the neurosicience", "after": "inspired by neuroscience", "start_char_pos": 579, "end_char_pos": 609}, {"type": "R", "before": "improve the robustness of our model against", "after": "better handle", "start_char_pos": 795, "end_char_pos": 838}, {"type": "R", "before": "To thoroughly evaluate our model, we conduct extensive experiments in both subject-dependent and subject-independent classification settings", "after": "Extensive experiments", "start_char_pos": 900, "end_char_pos": 1040}, {"type": "R", "before": ":", "after": ",", "start_char_pos": 1064, "end_char_pos": 1065}, {"type": "R", 
"before": ". Our model obtains better performance than competitive baselines such as SVM, DBN, DGCNN, BiDANN, and the", "after": ", demonstrate the superior performance of our model than", "start_char_pos": 1083, "end_char_pos": 1189}, {"type": "R", "before": "BiHDM", "after": "models", "start_char_pos": 1207, "end_char_pos": 1212}, {"type": "R", "before": "Our model analysis demonstrates", "after": "Moreover, ablation studies show", "start_char_pos": 1244, "end_char_pos": 1275}, {"type": "D", "before": "biologically supported", "after": null, "start_char_pos": 1294, "end_char_pos": 1316}, {"type": "R", "before": ". Investigations", "after": "of our RGNN model. Finally, investigations", "start_char_pos": 1417, "end_char_pos": 1433}, {"type": "R", "before": "that pre-frontal, parietal and occipital regions may be the most informative regions for emotion recognition, which is consistent with relevant prior studies. In addition, experimental results suggest that global", "after": "important brain regions and", "start_char_pos": 1468, "end_char_pos": 1680}, {"type": "R", "before": "between the left and right hemispheres are important for emotion recognition and local inter-channel relations between (FP1, AF3), (F6, F8) and (FP2, AF4) may also provide useful information", "after": "for EEG-based emotion recognition", "start_char_pos": 1705, "end_char_pos": 1895}], "sents_char_pos": [0, 86, 199, 641, 899, 1243, 1418, 1626]} {"doc_id": "1907.08047", "revision_depth": "1", "before_revision": "Developed countries are increasingly relying on gas storage to ensure security of supply. In this article we consider an approach to gas storage valuation in which the information about the time at which the holder of a gas storage contract should choose to inject or withdraw gas is modelled using a Brownian bridge that starts at zero and is conditioned to equal a constant x in the time of injection and a constant y in the time of withdrawal. 
This enables to catch some empirical facts on the behavior of gas storage markets: when the Brownian bridge process is away from the boundaries x and y, the holder of the gas storage contract can be relatively sure that the best decision is to do nothing. However, when the bridge information process absorbs y, the holder of the contract should take the decision of withdrawing gas on the other hand, they should take the decision to inject gas when the process absorbs x . In this sense the Brownian bridge information process leaks information concerning the time at which the holder of a storage contract can choose to inject gas, do nothing, or withdraw gas. The issue of storage valuation is not limited to gas markets, storages also plays a significant, balancing role in, for example, oil markets, soft commodity markets and even electricity. The principle of our approach is applicable to those markets as well. In this paper we define and study the Brownian bridge with random length and pinning point. Its main objectives is to see if the properties of Brownian bridges with deterministic length and pinning point remain valid in case its length and pinning point are random. Amongst other we prove that the bridge fails to be Markovian for pinning points having a law, which is absolutely continuous with respect to the Lebesgue measure .", "after_revision": "In this paper, we define and study the Brownian bridge with random length and pinning point. One of the main results of this work is that, unlike the deterministic case, the bridge fails to be Markovian for pinning points, the law of which is absolutely continuous with respect to the Lebesgue measure. As an application, we propose a new approach to gas storage valuation . 
We model the flow of information concerning the time at which the holders of a gas storage contract should inject or withdraw gas , by using a Brownian bridge starting from zero and conditioned to be equal to a constant x at the time of injection , and to a constant y at the time of withdrawal. This enables us to catch some empirical facts on the behavior of gas storage markets: when the Brownian bridge process is away from the boundaries x and y, the holders of the gas storage contract can be relatively sure that the best decision is to do nothing. However, when this process absorbs y, the holders of the contract should take the decision of withdrawing gas , otherwise, when the process absorbs x , they should take the decision of injecting gas .", "edit_actions": [{"type": "R", "before": "Developed countries are increasingly relying on gas storage to ensure security of supply. In this article we consider an", "after": "In this paper, we define and study the Brownian bridge with random length and pinning point. One of the main results of this work is that, unlike the deterministic case, the bridge fails to be Markovian for pinning points, the law of which is absolutely continuous with respect to the Lebesgue measure. As an application, we propose a new", "start_char_pos": 0, "end_char_pos": 120}, {"type": "R", "before": "in which the information about", "after": ". 
We model the flow of information concerning", "start_char_pos": 155, "end_char_pos": 185}, {"type": "R", "before": "holder", "after": "holders", "start_char_pos": 208, "end_char_pos": 214}, {"type": "D", "before": "choose to", "after": null, "start_char_pos": 248, "end_char_pos": 257}, {"type": "R", "before": "is modelled", "after": ", by", "start_char_pos": 281, "end_char_pos": 292}, {"type": "R", "before": "that starts at zero and is conditioned to equal", "after": "starting from zero and conditioned to be equal to", "start_char_pos": 317, "end_char_pos": 364}, {"type": "R", "before": "in", "after": "at", "start_char_pos": 378, "end_char_pos": 380}, {"type": "R", "before": "and", "after": ", and to", "start_char_pos": 403, "end_char_pos": 406}, {"type": "R", "before": "in", "after": "at", "start_char_pos": 420, "end_char_pos": 422}, {"type": "A", "before": null, "after": "us", "start_char_pos": 460, "end_char_pos": 460}, {"type": "R", "before": "holder", "after": "holders", "start_char_pos": 605, "end_char_pos": 611}, {"type": "R", "before": "the bridge information", "after": "this", "start_char_pos": 718, "end_char_pos": 740}, {"type": "R", "before": "holder", "after": "holders", "start_char_pos": 764, "end_char_pos": 770}, {"type": "R", "before": "on the other hand, they should take the decision to inject gas", "after": ", otherwise,", "start_char_pos": 831, "end_char_pos": 893}, {"type": "R", "before": ". In this sense the Brownian bridge information process leaks information concerning the time at which the holder of a storage contract can choose to inject gas, do nothing, or withdraw gas. The issue of storage valuation is not limited to gas markets, storages also plays a significant, balancing role in, for example, oil markets, soft commodity markets and even electricity. The principle of our approach is applicable to those markets as well. In this paper we define and study the Brownian bridge with random length and pinning point. 
Its main objectives is to see if the properties of Brownian bridges with deterministic length and pinning point remain valid in case its length and pinning point are random. Amongst other we prove that the bridge fails to be Markovian for pinning points having a law, which is absolutely continuous with respect to the Lebesgue measure", "after": ", they should take the decision of injecting gas", "start_char_pos": 921, "end_char_pos": 1796}], "sents_char_pos": [0, 89, 446, 703, 1111, 1298, 1368, 1460, 1634]} {"doc_id": "1907.08481", "revision_depth": "1", "before_revision": "Apis mellifera is the worldwide most important pollinator of crops, and a keystone species for most of the angiosperms (flowering plants) which constitute our feeding, making it crucial to our health and survival. The composition and configuration of the landscape in which Apis mellifera lives will likely determine its well being and the pollination service it can provide to the crops. Here we built a spatially explicit model which predicts the spatial distribution of visits to crop of Apis mellifera by simulating each day the trips of honey bees in the crops , the demographical dynamic of each hive , and their honey production. This model goes beyond existing approaches by including 1) a flower resource affected by the nectar and pollen extractionby the pollinators, 2) a pollinator growth dynamic which allows competition through short term resource depletion, 3) a probabilistic approach of the foraging behavior, modeling the fact that the pollinator has only a partial knowledge of the resource on its surroundings, and 4) the unique foraging habits of Apis mellifera regarding its election of foraging sites. With this simple but quite realistic model we show the importance of keeping a minimal fraction of natural habitat in an agricultural landscape. 
We also evaluate the effects of the landscape structure on the pollination, and demonstrate that it exists an optimal size of natural habitat patches which maximize the pollination service for a fixed fraction of natural habitat . Finally, we reveal the effect of the distinct ways of mixing culture on the number of visits to the crop .", "after_revision": "Apis mellifera plays a crucial role as pollinator of the majority of crops linked to food production and thus its presence is currently fundamental to our health and survival. The composition and configuration of the landscape in which Apis mellifera lives will likely determine the well-being of the hives and the pollination service that their members can provide to the crops. Here we present a spatially explicit model that predicts the spatial distribution of visits by Apis mellifera to crops, by simulating daily trips of honey bees , the demographical dynamic of each hive and their honey production. This model goes beyond existing approaches by including 1) a flower resource affected by the feedback interaction between nectar extraction, pollination, blossoming and repeated visits, 2) a pollinators dynamic that allows competition through short term resource depletion, 3) a probabilistic approach of the foraging behavior, modeling the fact that the pollinators have only partial knowledge of the resource on their surroundings, and 4) the specific and systematic foraging behavior and strategies of Apis mellifera at the moment of choosing foraging sites, as opposed to those adopted by solitary and wild pollinators. With a balance between simplicity and realism we show the importance of keeping a minimal fraction of natural habitat in an agricultural landscape. 
We also evaluate the effects of the landscape 's structure on pollination, and demonstrate that there exists an optimal size of natural habitat patches that maximizes the pollination service for a fixed fraction of natural habitat .", "edit_actions": [{"type": "R", "before": "is the worldwide most important pollinator of crops, and a keystone species for most of the angiosperms (flowering plants) which constitute our feeding, making it crucial", "after": "plays a crucial role as pollinator of the majority of crops linked to food production and thus its presence is currently fundamental", "start_char_pos": 15, "end_char_pos": 185}, {"type": "R", "before": "its well being", "after": "the well-being of the hives", "start_char_pos": 317, "end_char_pos": 331}, {"type": "R", "before": "it", "after": "that their members", "start_char_pos": 360, "end_char_pos": 362}, {"type": "R", "before": "built", "after": "present", "start_char_pos": 397, "end_char_pos": 402}, {"type": "R", "before": "which", "after": "that", "start_char_pos": 430, "end_char_pos": 435}, {"type": "R", "before": "to crop of Apis mellifera by simulating each day the", "after": "by Apis mellifera to crops, by simulating daily", "start_char_pos": 480, "end_char_pos": 532}, {"type": "D", "before": "in the crops", "after": null, "start_char_pos": 553, "end_char_pos": 565}, {"type": "D", "before": ",", "after": null, "start_char_pos": 607, "end_char_pos": 608}, {"type": "R", "before": "nectar and pollen extractionby the pollinators,", "after": "feedback interaction between nectar extraction, pollination, blossoming and repeated visits,", "start_char_pos": 730, "end_char_pos": 777}, {"type": "R", "before": "pollinator growth dynamic which", "after": "pollinators dynamic that", "start_char_pos": 783, "end_char_pos": 814}, {"type": "R", "before": "pollinator has only a", "after": "pollinators have only", "start_char_pos": 954, "end_char_pos": 975}, {"type": "R", "before": "its", "after": "their", "start_char_pos": 
1013, "end_char_pos": 1016}, {"type": "R", "before": "unique foraging habits", "after": "specific and systematic foraging behavior and strategies", "start_char_pos": 1042, "end_char_pos": 1064}, {"type": "R", "before": "regarding its election of foraging sites. With this simple but quite realistic model", "after": "at the moment of choosing foraging sites, as opposed to those adopted by solitary and wild pollinators. With a balance between simplicity and realism", "start_char_pos": 1083, "end_char_pos": 1167}, {"type": "R", "before": "structure on the", "after": "'s structure on", "start_char_pos": 1316, "end_char_pos": 1332}, {"type": "R", "before": "it", "after": "there", "start_char_pos": 1367, "end_char_pos": 1369}, {"type": "R", "before": "which maximize", "after": "that maximizes", "start_char_pos": 1420, "end_char_pos": 1434}, {"type": "D", "before": ". Finally, we reveal the effect of the distinct ways of mixing culture on the number of visits to the crop", "after": null, "start_char_pos": 1499, "end_char_pos": 1605}], "sents_char_pos": [0, 213, 388, 636, 1124, 1269, 1500]} {"doc_id": "1907.09738", "revision_depth": "2", "before_revision": "Quantitative analysis of cell nuclei in microscopic images is an essential yet still challenging source of biological and pathological information. The major challenge is accurate detection and segmentation of densely packed nuclei in images acquired under a variety of conditions. With sufficient training examples, Mask R-CNN-based methods have achieved state-of-the-art nucleus segmentation. However, the current pipeline requires fully annotated training images, which are time consuming to create and sometimes infeasible because of the noisynature of microscopic images . Importantly, nuclei often have similar appearances within the same image ; this similarity could be utilized to segment nuclei with only partially labeled training examples. 
We propose a simple yet effective region proposal module for the current Mask R-CNN pipeline to perform few-exemplar learning. To capture the similarities between the unlabeled regions and labeled nuclei, we apply decomposed self-attention to the learned features. On the self-attention map, we observe strong activation at the centers and edges of all nuclei, including the unlabeled ones . On this basis, our region proposal module propagates the partial annotations to the whole image and then proposes effective bounding boxes for the bounding box regression and binary mask generation modules. When trained with only 1/4 of the nuclei annotated, the baseline pipeline gives frequent false negatives, while our approach retains detection accuracy comparable to that of training with the fully annotated data. Moreover, our method can serve as a bootstrapping step to create a full annotation of a dataset, where annotations are iteratively generated and corrected until the predetermined coverage and accuracy are reached. The source code is available at URL", "after_revision": "Quantitative analysis of cell nuclei in microscopic images is an essential yet challenging source of biological and pathological information. The major challenge is accurate detection and segmentation of densely packed nuclei in images acquired under a variety of conditions. Mask R-CNN-based methods have achieved state-of-the-art nucleus segmentation. However, the current pipeline requires fully annotated training images, which are time consuming to create and sometimes noisy . Importantly, nuclei often appear similar within the same image . This similarity could be utilized to segment nuclei with only partially labeled training examples. We propose a simple yet effective region-proposal module for the current Mask R-CNN pipeline to perform few-exemplar learning. To capture the similarities between unlabeled regions and labeled nuclei, we apply decomposed self-attention to learned features. 
On the self-attention map, we observe strong activation at the centers and edges of all nuclei, including unlabeled nuclei . On this basis, our region-proposal module propagates partial annotations to the whole image and proposes effective bounding boxes for the bounding box-regression and binary mask-generation modules. Our method effectively learns from unlabeled regions thereby improving detection performance. We test our method with various nuclear images. When trained with only 1/4 of the nuclei annotated, our approach retains a detection accuracy comparable to that from training with fully annotated data. Moreover, our method can serve as a bootstrapping step to create full annotations of datasets, iteratively generating and correcting annotations until a predetermined coverage and accuracy are reached. The source code is available at URL", "edit_actions": [{"type": "D", "before": "still", "after": null, "start_char_pos": 79, "end_char_pos": 84}, {"type": "D", "before": "With sufficient training examples,", "after": null, "start_char_pos": 282, "end_char_pos": 316}, {"type": "R", "before": "infeasible because of the noisynature of microscopic images", "after": "noisy", "start_char_pos": 516, "end_char_pos": 575}, {"type": "R", "before": "have similar appearances", "after": "appear similar", "start_char_pos": 604, "end_char_pos": 628}, {"type": "R", "before": "; this", "after": ". 
This", "start_char_pos": 651, "end_char_pos": 657}, {"type": "R", "before": "region proposal", "after": "region-proposal", "start_char_pos": 786, "end_char_pos": 801}, {"type": "D", "before": "the", "after": null, "start_char_pos": 915, "end_char_pos": 918}, {"type": "D", "before": "the", "after": null, "start_char_pos": 995, "end_char_pos": 998}, {"type": "R", "before": "the unlabeled ones", "after": "unlabeled nuclei", "start_char_pos": 1123, "end_char_pos": 1141}, {"type": "R", "before": "region proposal module propagates the", "after": "region-proposal module propagates", "start_char_pos": 1163, "end_char_pos": 1200}, {"type": "D", "before": "then", "after": null, "start_char_pos": 1244, "end_char_pos": 1248}, {"type": "R", "before": "box regression and binary mask generation modules.", "after": "box-regression and binary mask-generation modules. Our method effectively learns from unlabeled regions thereby improving detection performance. We test our method with various nuclear images.", "start_char_pos": 1300, "end_char_pos": 1350}, {"type": "D", "before": "the baseline pipeline gives frequent false negatives, while", "after": null, "start_char_pos": 1403, "end_char_pos": 1462}, {"type": "A", "before": null, "after": "a", "start_char_pos": 1484, "end_char_pos": 1484}, {"type": "R", "before": "of training with the", "after": "from training with", "start_char_pos": 1523, "end_char_pos": 1543}, {"type": "R", "before": "a full annotation of a dataset, where annotations are iteratively generated and corrected until the", "after": "full annotations of datasets, iteratively generating and correcting annotations until a", "start_char_pos": 1631, "end_char_pos": 1730}], "sents_char_pos": [0, 147, 281, 394, 577, 652, 751, 878, 1016, 1143, 1350, 1565, 1779]} {"doc_id": "1908.00257", "revision_depth": "1", "before_revision": "Market dynamic is studied by quantifying the dependence of the entropy S(\\tau,n) of the clusters formed by the series of the prices p_t and its 
moving average p_{t,n} on temporal horizon M. We report results of the analysis\\par performed on high-frequency data of the Nasdaq Composite, Dow Jones Industrial Avg and Standard \\& Poor 500 indexes downloaded from the Bloomberg terminal www. bloomberg.com/professional. Both raw and sampled data series have been analysed for a broad range of horizons M, varying from one to twelve months over the year 2018. A systematic dependence of the cluster entropy function S(\\tau,n) on the horizon M has been evidenced in the analysed assets. Hence, the cluster entropy function is integrated over the cluster \\tau to yield a synthetic indicator of price evolution: the\\emph{Market Dynamic Index I(M,n) . Moreover, the\\emph{Market Horizon Dependence\\par defined as H(M,n)=I(M,n)-I(1,n) is calculated and compared with the values of the horizon dependence of the pricing kernel with different representative agent models obtained by a Kullback-Leibler entropy approach .", "after_revision": "Market dynamic is quantified in terms of the entropy S(\\tau,n) of the clusters formed by the intersections between the series of the prices p_t and the moving average p_{t,n} . The entropy S(\\tau,n) is defined according to Shannon as \\sum P(\\tau,n)\\log P(\\tau,n), with P(\\tau,n) the probability for the cluster to occur with duration \\tau.\\par The investigation is performed on high-frequency data of the Nasdaq Composite, Dow Jones Industrial Avg and Standard \\& Poor 500 indexes downloaded from the Bloomberg terminal . The cluster entropy S(\\tau,n) is analysed in raw and sampled data over a broad range of temporal horizons M varying from one to twelve months over the year 2018. The cluster entropy S(\\tau,n) is integrated over the cluster duration \\tau to yield \\emph{the Market Dynamic Index I(M,n), a synthetic figure of price dynamics. 
A systematic dependence of the cluster entropy S(\\tau,n) and the Market Dynamic Index I(M,n) \\emph{on the temporal horizon M is evidenced.\\par Finally, the Market Horizon Dependence , defined as H(M,n)=I(M,n)-I(1,n) , is compared with the horizon dependence of the pricing kernel with different representative agents obtained via a Kullback-Leibler entropy approach . The Market Horizon Dependence H(M,n) of the three assets is compared against the values obtained by implementing the cluster entropy S(\\tau,n) approach on artificially generated series (Fractional Brownian Motion) .", "edit_actions": [{"type": "R", "before": "studied by quantifying the dependence", "after": "quantified in terms", "start_char_pos": 18, "end_char_pos": 55}, {"type": "A", "before": null, "after": "intersections between the", "start_char_pos": 111, "end_char_pos": 111}, {"type": "R", "before": "its", "after": "the", "start_char_pos": 141, "end_char_pos": 144}, {"type": "R", "before": "on temporal horizon M. We report results of the analysis", "after": ". The entropy S(\\tau,n) is defined according to Shannon as \\sum P(\\tau,n)\\log P(\\tau,n), with P(\\tau,n) the probability for the cluster to occur with duration \\tau.", "start_char_pos": 168, "end_char_pos": 224}, {"type": "A", "before": null, "after": "The investigation is", "start_char_pos": 229, "end_char_pos": 229}, {"type": "R", "before": "www. bloomberg.com/professional. Both", "after": ". 
The cluster entropy S(\\tau,n) is analysed in", "start_char_pos": 385, "end_char_pos": 422}, {"type": "R", "before": "series have been analysed for", "after": "over", "start_char_pos": 444, "end_char_pos": 473}, {"type": "R", "before": "horizons M,", "after": "temporal horizons M", "start_char_pos": 491, "end_char_pos": 502}, {"type": "R", "before": "A systematic dependence of the cluster entropy function", "after": "The cluster entropy", "start_char_pos": 557, "end_char_pos": 612}, {"type": "D", "before": "on the horizon M has been evidenced in the analysed assets. Hence, the cluster entropy function", "after": null, "start_char_pos": 623, "end_char_pos": 718}, {"type": "A", "before": null, "after": "duration", "start_char_pos": 750, "end_char_pos": 750}, {"type": "D", "before": "a synthetic indicator of price evolution: the", "after": null, "start_char_pos": 765, "end_char_pos": 810}, {"type": "R", "before": "Market Dynamic Index", "after": "the Market Dynamic Index I(M,n), a synthetic figure of price dynamics. A systematic dependence of the cluster entropy S(\\tau,n) and the Market Dynamic Index", "start_char_pos": 816, "end_char_pos": 836}, {"type": "D", "before": ". 
Moreover, the", "after": null, "start_char_pos": 844, "end_char_pos": 859}, {"type": "R", "before": "Market Horizon Dependence", "after": "on the temporal horizon M is evidenced.", "start_char_pos": 865, "end_char_pos": 890}, {"type": "A", "before": null, "after": "Finally, the Market Horizon Dependence", "start_char_pos": 895, "end_char_pos": 895}, {"type": "A", "before": null, "after": ",", "start_char_pos": 896, "end_char_pos": 896}, {"type": "R", "before": "is calculated and", "after": ", is", "start_char_pos": 929, "end_char_pos": 946}, {"type": "D", "before": "values of the", "after": null, "start_char_pos": 965, "end_char_pos": 978}, {"type": "R", "before": "agent models obtained by", "after": "agents obtained via", "start_char_pos": 1050, "end_char_pos": 1074}, {"type": "A", "before": null, "after": ". The Market Horizon Dependence H(M,n) of the three assets is compared against the values obtained by implementing the cluster entropy S(\\tau,n) approach on artificially generated series (Fractional Brownian Motion)", "start_char_pos": 1111, "end_char_pos": 1111}], "sents_char_pos": [0, 190, 417, 556, 682]} {"doc_id": "1908.01406", "revision_depth": "3", "before_revision": "We study a class of tests of the randomness of Bernoulli sequences and their application to analyses of the human tendency to perceive streaks of consecutive successes as overly representative of positive dependence-the hot hand fallacy. In particular, we study tests of randomness (i.e., that trials are i.i.d.) based on test statistics that compare the proportion of successes that directly follow k consecutive successes with either the overall proportion of successes or the proportion of successes that directly follow k consecutive failures. 
We derive the asymptotic distributions of these test statistics and their permutation distributions under randomness, under a set of general stationary processes, and under a Markov model of \"streakiness\" , which allow us to evaluate their local asymptotic power. The results are applied to evaluate tests of randomness implemented on data from a basketball shooting experiment, whose conclusions are disputed by Gilovich, Vallone, and Tversky (1985) and Miller and Sanjurjo (2018a) . We establish that substantially larger data sets are required to derive an informative measurement of the deviation from randomness in basketball shooting. Although multiple testing procedures reveal that one shooter in the experiment exhibits a shooting pattern significantly inconsistent with randomness - supplying strong evidence that basketball shooting is not random for all shooters all of the time - we find that the evidence against randomness is limited to this shooter. Our results provide a mathematical and statistical foundation for the design and validation of experiments that directly compare deviations from randomness with human beliefs about deviations from randomness in Bernoulli sequences .", "after_revision": "We study a class of permutation tests of the randomness of a collection of Bernoulli sequences and their application to analyses of the human tendency to perceive streaks of consecutive successes as overly representative of positive dependence - the hot hand fallacy. In particular, we study permutation tests of the null hypothesis of randomness (i.e., that trials are i.i.d.) based on test statistics that compare the proportion of successes that directly follow k consecutive successes with either the overall proportion of successes or the proportion of successes that directly follow k consecutive failures. 
We characterize the asymptotic distributions of these test statistics and their permutation distributions under randomness, under a set of general stationary processes, and under a class of Markov chain alternatives , which allow us to derive their local asymptotic power. The results are applied to evaluate the empirical support for the hot hand fallacy provided by four controlled basketball shooting experiments . We establish that substantially larger data sets are required to derive an informative measurement of the deviation from randomness in basketball shooting. In one experiment for which we were able to obtain data, multiple testing procedures reveal that one shooter exhibits a shooting pattern significantly inconsistent with randomness - supplying strong evidence that basketball shooting is not random for all shooters all of the time . However, we find that the evidence against randomness in this experiment is limited to this shooter. Our results provide a mathematical and statistical foundation for the design and validation of experiments that directly compare deviations from randomness with human beliefs about deviations from randomness , and thereby constitute a direst test of the hot hand fallacy .", "edit_actions": [{"type": "A", "before": null, "after": "permutation", "start_char_pos": 20, "end_char_pos": 20}, {"type": "A", "before": null, "after": "a collection of", "start_char_pos": 48, "end_char_pos": 48}, {"type": "R", "before": "dependence-the", "after": "dependence - the", "start_char_pos": 207, "end_char_pos": 221}, {"type": "R", "before": "tests of", "after": "permutation tests of the null hypothesis of", "start_char_pos": 264, "end_char_pos": 272}, {"type": "R", "before": "derive", "after": "characterize", "start_char_pos": 553, "end_char_pos": 559}, {"type": "R", "before": "Markov model of \"streakiness\"", "after": "class of Markov chain alternatives", "start_char_pos": 725, "end_char_pos": 754}, {"type": "R", "before": "evaluate", "after": 
"derive", "start_char_pos": 775, "end_char_pos": 783}, {"type": "R", "before": "tests of randomness implemented on data from a basketball shooting experiment, whose conclusions are disputed by Gilovich, Vallone, and Tversky (1985) and Miller and Sanjurjo (2018a)", "after": "the empirical support for the hot hand fallacy provided by four controlled basketball shooting experiments", "start_char_pos": 850, "end_char_pos": 1032}, {"type": "R", "before": "Although", "after": "In one experiment for which we were able to obtain data,", "start_char_pos": 1191, "end_char_pos": 1199}, {"type": "D", "before": "in the experiment", "after": null, "start_char_pos": 1252, "end_char_pos": 1269}, {"type": "R", "before": "-", "after": ". However,", "start_char_pos": 1441, "end_char_pos": 1442}, {"type": "A", "before": null, "after": "in this experiment", "start_char_pos": 1488, "end_char_pos": 1488}, {"type": "R", "before": "in Bernoulli sequences", "after": ", and thereby constitute a direst test of the hot hand fallacy", "start_char_pos": 1725, "end_char_pos": 1747}], "sents_char_pos": [0, 239, 549, 813, 1034, 1190, 1516]} {"doc_id": "1908.06900", "revision_depth": "1", "before_revision": "Test automation can result in reduction in cost and human effort . If the optimal policy, the course of actions taken, for the intended objective in a testing process could be learnt by the testing system (e.g., a smart tester agent), then it could be reused in similar situations, thus leading to higher efficiency, i. e., less computational time. Automating stress testing to find performance breaking points remains a challenge for complex software systems. Common approaches are mainly based on source code or system model analysis or use-case based techniques. However, source code or system models might not be available at testing time. 
In this paper, we propose a self-adaptive fuzzy reinforcement learning-based performance (stress) testing framework (SaFReL) that enables the tester agent to learn the optimal policy for generating stress test cases leading to performance breaking point without access to performance model of the system under test. SaFReL learns the optimal policy through an initial learning , then reuses it during a transfer learning phase, while keeping the learning running in the long-term . Through multiple experiments on a simulated environment, we demonstrate that our approach generates the stress test cases for different programs efficiently and adaptively without access to performance models.", "after_revision": "Test automation brings the potential to reduce costs and human effort , but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points . Current approaches to tackle automated generation of performance test cases mainly involve using source code or system model analysis or use-case based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process instead could be learned by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase , then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term . 
Through multiple experiments on a simulated environment, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process, and performs adaptively without access to source code and performance models.", "edit_actions": [{"type": "R", "before": "can result in reduction in cost", "after": "brings the potential to reduce costs", "start_char_pos": 16, "end_char_pos": 47}, {"type": "R", "before": ". If the optimal policy, the course of actions taken, for the intended objective in a testing process could be learnt by the testing system (e.g., a smart tester agent), then it could be reused in similar situations, thus leading to higher efficiency, i. e., less computational time. Automating stress", "after": ", but several aspects of software testing remain challenging to automate. One such example is automated performance", "start_char_pos": 65, "end_char_pos": 366}, {"type": "R", "before": "remains a challenge for complex software systems. Common approaches are mainly based on", "after": ". Current approaches to tackle automated generation of performance test cases mainly involve using", "start_char_pos": 411, "end_char_pos": 498}, {"type": "R", "before": "or", "after": "and", "start_char_pos": 587, "end_char_pos": 589}, {"type": "A", "before": null, "after": "always", "start_char_pos": 614, "end_char_pos": 614}, {"type": "R", "before": "In this paper, we propose a", "after": "On the other hand, if the optimal performance testing policy for the intended objective in a testing process instead could be learned by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. 
We propose SaFReL, a", "start_char_pos": 645, "end_char_pos": 672}, {"type": "R", "before": "performance (stress) testing framework (SaFReL) that enables the tester agent to learn the optimal policy for generating stress test cases leading to performance breaking point without access to performance model of the system under test. SaFReL", "after": "performance testing framework. SaFReL", "start_char_pos": 722, "end_char_pos": 967}, {"type": "A", "before": null, "after": "to generate performance test cases", "start_char_pos": 994, "end_char_pos": 994}, {"type": "A", "before": null, "after": "phase", "start_char_pos": 1023, "end_char_pos": 1023}, {"type": "R", "before": "in the long-term", "after": "and updating the policy in the long term", "start_char_pos": 1110, "end_char_pos": 1126}, {"type": "R", "before": "stress", "after": "target performance", "start_char_pos": 1233, "end_char_pos": 1239}, {"type": "R", "before": "efficiently and", "after": "more efficiently than a typical testing process, and performs", "start_char_pos": 1274, "end_char_pos": 1289}, {"type": "A", "before": null, "after": "source code and", "start_char_pos": 1319, "end_char_pos": 1319}], "sents_char_pos": [0, 66, 348, 460, 565, 644, 960, 1128]} {"doc_id": "1908.07452", "revision_depth": "1", "before_revision": "We develop a framework that creates a new polygonal mesh representation of the 3D domain of a layer-by-layer 3D printing job on which we identify single, continuous tool paths covering each connected piece of the domain in every layer. We present a tool path algorithm that traverses each such continuous tool path with no crossovers. The key construction at the heart of our framework is a novel Euler transformation that we introduced recently in a separate manuscript. Our Euler transformation converts a 2-dimensional cell complex K into a new 2-complex K^ such that every vertex in the 1-skeleton G^ of K^ has degree 4. 
Hence G^ is Eulerian, and an Eulerian tour can be followed to print all edges in a continuous fashion without stops . We start with a mesh K of the union of polygons obtained by projecting all layers to the plane. First we compute its Euler transformation K^. In the slicing step, we clip K^ at each layer i using its polygon to obtain K^_i . We then patch K^_i by adding edges such that any odd-degree nodes created by slicing are transformed to have even degrees again. We print extra support edges in place of any segments left out to ensure there are no edges without support in the next layer above . These support edges maintain the Euler nature of K^_i. Finally , we describe a tree-based search algorithm that builds the continuous tool path by traversing \"concentric\" cycles in the Euler complex. Our algorithm produces a tool path that avoids material collisions and crossovers, and can be printed in a continuous fashion irrespective of complex geometry or topology of the domain (e.g., holes) .", "after_revision": "We develop a framework that creates a new polygonal mesh representation of the sparse infill domain of a layer-by-layer 3D printing job . We guarantee the existence of a single, continuous tool path covering each connected piece of the domain in every layer. We present a tool path algorithm that traverses each such continuous tool path with no crossovers. The key construction at the heart of our framework is an Euler transformation which converts a 2-dimensional cell complex K into a new 2-complex K^ such that every vertex in the 1-skeleton G^ of K^ has even degree. Hence G^ is Eulerian, and a Eulerian tour can be followed to print all edges in a continuous fashion . We start with a mesh K of the union of polygons obtained by projecting all layers to the plane. We compute its Euler transformation K^. In the slicing step, we clip K^ at each layer using its polygon to obtain a complex that may not necessarily be Euler . 
We then patch this complex by adding edges such that any odd-degree nodes created by slicing are transformed to have even degrees again. We print extra support edges in place of any segments left out to ensure there are no edges without support in the next layer . These support edges maintain the Euler nature of the complex. Finally we describe a tree-based search algorithm that builds the continuous tool path by traversing \"concentric\" cycles in the Euler complex. Our algorithm produces a tool path that avoids material collisions and crossovers, and can be printed in a continuous fashion irrespective of complex geometry or topology of the domain (e.g., holes) . We implement our test our framework on several 3D objects. Apart from standard geometric shapes, we demonstrate the framework on the Stanford bunny .", "edit_actions": [{"type": "R", "before": "3D", "after": "sparse infill", "start_char_pos": 79, "end_char_pos": 81}, {"type": "R", "before": "on which we identify", "after": ". We guarantee the existence of a", "start_char_pos": 125, "end_char_pos": 145}, {"type": "R", "before": "paths", "after": "path", "start_char_pos": 170, "end_char_pos": 175}, {"type": "R", "before": "a novel Euler transformation that we introduced recently in a separate manuscript. 
Our Euler transformation", "after": "an Euler transformation which", "start_char_pos": 389, "end_char_pos": 496}, {"type": "R", "before": "degree 4.", "after": "even degree.", "start_char_pos": 615, "end_char_pos": 624}, {"type": "R", "before": "an", "after": "a", "start_char_pos": 651, "end_char_pos": 653}, {"type": "D", "before": "without stops", "after": null, "start_char_pos": 727, "end_char_pos": 740}, {"type": "R", "before": "First we", "after": "We", "start_char_pos": 839, "end_char_pos": 847}, {"type": "D", "before": "i", "after": null, "start_char_pos": 931, "end_char_pos": 932}, {"type": "R", "before": "K^_i", "after": "a complex that may not necessarily be Euler", "start_char_pos": 961, "end_char_pos": 965}, {"type": "R", "before": "K^_i", "after": "this complex", "start_char_pos": 982, "end_char_pos": 986}, {"type": "D", "before": "above", "after": null, "start_char_pos": 1223, "end_char_pos": 1228}, {"type": "R", "before": "K^_i. Finally ,", "after": "the complex. Finally", "start_char_pos": 1280, "end_char_pos": 1295}, {"type": "A", "before": null, "after": ". We implement our test our framework on several 3D objects. Apart from standard geometric shapes, we demonstrate the framework on the Stanford bunny", "start_char_pos": 1630, "end_char_pos": 1630}], "sents_char_pos": [0, 235, 334, 471, 624, 742, 838, 884, 967, 1096, 1230, 1285, 1430]} {"doc_id": "1908.08474", "revision_depth": "1", "before_revision": "The Shapley value has become a popular method to attribute the prediction of a machine-learning model on an input to its base features. The Shapley value [ 1 ] is known to be the unique method that satisfies certain desirable properties , and this motivates its use. Unfortunately, despite this uniqueness result, there are a multiplicity of Shapley values used in explaining a model's prediction. This is because there are many ways to apply the Shapley value that differ in how they reference the model, the training data, and the explanation context. 
In this paper, we study an approach that applies the Shapley value to conditional expectations (CES) of sets of features (cf. 2%DIFDELCMD < ]%%% ) that subsumes several prior approaches within a common framework. We provide the first algorithm for the general version of CES. We show that CES can result in counterintuitive attributions in theory and in practice (we study a diabetes prediction task); for instance, CES can assign non-zero attributions to features that are not referenced by the model. In contrast, we show that an approach called the Baseline Shapley ( BS) does not exhibit counterintuitive attributions; we support this claim with a uniqueness (axiomatic) result. We show that BS is a special case of CES, and CES with an independent feature distribution coincides with a randomized version of BS. Thus, BS fits into the CES framework, but does not suffer from many of CES's deficiencies .", "after_revision": "The Shapley value has become a popular method to attribute the prediction of a machine-learning model on an input to its base features. The use of the Shapley value is justified by citing [ 16 ] showing that it is theunique method that satisfies certain good properties (axioms). There are, however, a multiplicity of ways in which the Shapley value is operationalized in the attribution problem. These differ in how they reference the model, the training data, and the explanation context. %DIFDELCMD < ]%%% These give very different results, rendering the uniqueness result meaningless. Furthermore, we find that previously proposed approaches can produce counterintuitive attributions in theory and in practice---for instance, they can assign non-zero attributions to features that are not even referenced by the model. 
In this paper, we use the axiomatic approach to study the differences between some of the many operationalizations of the Shapley value for attribution, and propose a technique called Baseline Shapley ( BShap) that is backed by a proper uniqueness result. We also contrast BShap with Integrated Gradients, another extension of Shapley value to the continuous setting .", "edit_actions": [{"type": "R", "before": "Shapley value", "after": "use of the Shapley value is justified by citing", "start_char_pos": 140, "end_char_pos": 153}, {"type": "R", "before": "1", "after": "16", "start_char_pos": 156, "end_char_pos": 157}, {"type": "R", "before": "is known to be the unique", "after": "showing that it is the", "start_char_pos": 160, "end_char_pos": 185}, {"type": "A", "before": null, "after": "unique", "start_char_pos": 185, "end_char_pos": 185}, {"type": "R", "before": "desirable properties , and this motivates its use. Unfortunately, despite this uniqueness result, there are", "after": "good properties (", "start_char_pos": 216, "end_char_pos": 323}, {"type": "A", "before": null, "after": "axioms", "start_char_pos": 323, "end_char_pos": 323}, {"type": "A", "before": null, "after": "). There are, however,", "start_char_pos": 323, "end_char_pos": 323}, {"type": "R", "before": "Shapley values used in explaining a model's prediction. This is because there are many ways to apply", "after": "ways in which", "start_char_pos": 342, "end_char_pos": 442}, {"type": "R", "before": "that", "after": "is operationalized in the attribution problem. These", "start_char_pos": 461, "end_char_pos": 465}, {"type": "D", "before": "In this paper, we study an approach that applies the Shapley value to conditional expectations (CES) of sets of features (cf.", "after": null, "start_char_pos": 554, "end_char_pos": 679}, {"type": "D", "before": "2", "after": null, "start_char_pos": 680, "end_char_pos": 681}, {"type": "R", "before": ") that subsumes several prior approaches within a common framework. 
We provide the first algorithm for the general version of CES. We show that CES can result in", "after": "These give very different results, rendering the uniqueness result meaningless. Furthermore, we find that previously proposed approaches can produce", "start_char_pos": 699, "end_char_pos": 860}, {"type": "R", "before": "practice (we study a diabetes prediction task); for instance, CES", "after": "practice---for instance, they", "start_char_pos": 908, "end_char_pos": 973}, {"type": "A", "before": null, "after": "even", "start_char_pos": 1032, "end_char_pos": 1032}, {"type": "R", "before": "contrast, we show that an approach called the", "after": "this paper, we use the axiomatic approach to study the differences between some of the many operationalizations of the Shapley value for attribution, and propose a technique called", "start_char_pos": 1061, "end_char_pos": 1106}, {"type": "R", "before": "BS) does not exhibit counterintuitive attributions; we support this claim with a uniqueness (axiomatic)", "after": "BShap) that is backed by a proper uniqueness", "start_char_pos": 1126, "end_char_pos": 1229}, {"type": "R", "before": "show that BS is a special case of CES, and CES with an independent feature distribution coincides with a randomized version of BS. Thus, BS fits into the CES framework, but does not suffer from many of CES's deficiencies", "after": "also contrast BShap with Integrated Gradients, another extension of Shapley value to the continuous setting", "start_char_pos": 1241, "end_char_pos": 1461}], "sents_char_pos": [0, 135, 266, 397, 553, 766, 829, 955, 1057, 1177, 1237, 1371]} {"doc_id": "1908.09853", "revision_depth": "1", "before_revision": "Ecologists have long suspected that species are more likely to interact if their traits match in a particular way. For example, a pollination interaction may be particularly likely if the proportions of a bee's tongue match flower shapein a beneficial way. 
Empirical evidence for trait matching , however, varies significantly in strength among different types of ecological networks. Here, we show that ambiguity among empirical trait matching studies may have arisen at least in parts from using overly simple statistical models. Using simulated and real data, we contrast conventional regression models with Machine Learning (ML) models (Random Forest, Boosted Regression Trees, Deep Neural Networks, Convolutional Neural Networks, Support Vector Machines, naive Bayes, and k-Nearest-Neighbor), testing their ability to predict species interactions based on traits, and infer trait combinations causally responsible for species interactions. We find that the best ML models can successfully predict species interactions in plant-pollinator networks (up to 0.93 AUC) and outperform conventional regression models . Our results also demonstrate that ML models can better identify the causally responsible trait matching combinations than GLMs. In two case studies, the best ML models could successfully predict species interactions in a global plant-pollinator database and infer ecologically plausible trait matching rules for a plant-hummingbird network from Costa Rica , without any prior assumptions about the system . We conclude that flexible ML models offer many advantages over traditional regression models for understanding interaction networks. We anticipate that these results extrapolate to other network types, such as trophic or competitive networks . More generally, our results highlight the potential of ML and artificial intelligence for inference beyond standard tasks such as pattern recognition.", "after_revision": "Ecologists have long suspected that species are more likely to interact if their traits match in a particular way. For example, a pollination interaction may be more likely if the proportions of a bee's tongue fit a plant's flower shape. 
Empirical estimates of the importance of trait-matching for determining species interactions , however, vary significantly among different types of ecological networks. Here, we show that ambiguity among empirical trait-matching studies may have arisen at least in parts from using overly simple statistical models. Using simulated and real data, we contrast conventional generalized linear models (GLM) with more flexible Machine Learning (ML) models (Random Forest, Boosted Regression Trees, Deep Neural Networks, Convolutional Neural Networks, Support Vector Machines, naive Bayes, and k-Nearest-Neighbor), testing their ability to predict species interactions based on traits, and infer trait combinations causally responsible for species interactions. We find that the best ML models can successfully predict species interactions in plant-pollinator networks , outperforming GLMs by a substantial margin . Our results also demonstrate that ML models can better identify the causally responsible trait-matching combinations than GLMs. In two case studies, the best ML models successfully predicted species interactions in a global plant-pollinator database and inferred ecologically plausible trait-matching rules for a plant-hummingbird network , without any prior assumptions . We conclude that flexible ML models offer many advantages over traditional regression models for understanding interaction networks. We anticipate that these results extrapolate to other ecological network types . More generally, our results highlight the potential of machine learning and artificial intelligence for inference in ecology, beyond standard tasks such as image or pattern recognition.", "edit_actions": [{"type": "R", "before": "particularly", "after": "more", "start_char_pos": 161, "end_char_pos": 173}, {"type": "R", "before": "match flower shapein a beneficial way. Empirical evidence for trait matching", "after": "fit a plant's flower shape. 
Empirical estimates of the importance of trait-matching for determining species interactions", "start_char_pos": 218, "end_char_pos": 294}, {"type": "R", "before": "varies significantly in strength", "after": "vary significantly", "start_char_pos": 306, "end_char_pos": 338}, {"type": "R", "before": "trait matching", "after": "trait-matching", "start_char_pos": 430, "end_char_pos": 444}, {"type": "R", "before": "regression models with", "after": "generalized linear models (GLM) with more flexible", "start_char_pos": 588, "end_char_pos": 610}, {"type": "R", "before": "(up to 0.93 AUC) and outperform conventional regression models", "after": ", outperforming GLMs by a substantial margin", "start_char_pos": 1052, "end_char_pos": 1114}, {"type": "R", "before": "trait matching", "after": "trait-matching", "start_char_pos": 1206, "end_char_pos": 1220}, {"type": "R", "before": "could successfully predict", "after": "successfully predicted", "start_char_pos": 1285, "end_char_pos": 1311}, {"type": "R", "before": "infer ecologically plausible trait matching", "after": "inferred ecologically plausible trait-matching", "start_char_pos": 1375, "end_char_pos": 1418}, {"type": "D", "before": "from Costa Rica", "after": null, "start_char_pos": 1457, "end_char_pos": 1472}, {"type": "D", "before": "about the system", "after": null, "start_char_pos": 1505, "end_char_pos": 1521}, {"type": "R", "before": "network types, such as trophic or competitive networks", "after": "ecological network types", "start_char_pos": 1711, "end_char_pos": 1765}, {"type": "R", "before": "ML", "after": "machine learning", "start_char_pos": 1823, "end_char_pos": 1825}, {"type": "A", "before": null, "after": "in ecology,", "start_char_pos": 1868, "end_char_pos": 1868}, {"type": "A", "before": null, "after": "image or", "start_char_pos": 1899, "end_char_pos": 1899}], "sents_char_pos": [0, 114, 256, 384, 531, 944, 1116, 1244, 1523, 1656, 1767]} {"doc_id": "1908.11436", "revision_depth": "1", "before_revision": 
"In a recent article, Zemblys et al. 1 reported on a new method for the classification of eye-movements using deep neural networks. I will refer to this paper as the \"gazeNet paper \" . I have found 2 errors and two problems with that paper that are explained herein. Error 1: The gazeNet machine-learning based \\textit{\\textbf{ classification method was built assuming that a hand-scored dataset from Lund University was all collected at 500 Hz, but in fact, six of the 34 recording files were actually collected at 200 Hz . Of the six datasets that were used as the training set for the gazeNet algorithm, 2 were actually collected at 200 Hz. Problem 1\\underline{\\textit{\\textbf{ has to do with the fact that even among the 500 Hz data, the inter-timestamp intervals varied widely. Problem 2 \\underline{\\textit{\\textbf{ is that there are many unusual discontinuities in the saccade trajectories from the Lund University dataset that make it a very poor choice for the construction of an automatic classification method. Error 2 \\underline{\\textit{\\textbf{ arises out of the novel event-related agreement analysis employed by the gazeNet authors. Although the authors intended to classify unmatched events as either false positives or false negatives, many are actually being classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative is actually driving the kappa statistic higher, whereas unmatched events should be driving the kappa statistic lower.", "after_revision": " Zemblys et al. \\mbox{%DIFAUXCMD gazeNet reported on a method for the classification of eye-movements ( \"gazeNet \" ) . I have found 2 errors and two problems with that paper that are explained herein. \\textit{\\textbf{Error 1: The gazeNet classification method was built assuming that a hand-scored dataset from Lund University was all collected at 500 Hz, but in fact, six of the 34 recording files were actually collected at 200Hz . 
Of the six datasets that were used as the training set for the gazeNet algorithm, 2 were actually collected at 200Hz.\\underline{\\textit{\\textbf{Problem 1 has to do with the fact that even among the 500Hz data, the inter-timestamp intervals varied widely. \\underline{\\textit{\\textbf{Problem 2 is that there are many unusual discontinuities in the saccade trajectories from the Lund University dataset that make it a very poor choice for the construction of an automatic classification method. \\underline{\\textit{\\textbf{Error 2 arises out of the novel event-related agreement analysis employed by the gazeNet authors. Although the authors intended to classify unmatched events as either false positives or false negatives, many are actually being classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative is actually driving kappa higher, whereas unmatched events should be driving kappa lower.", "edit_actions": [{"type": "D", "before": "In a recent article,", "after": null, "start_char_pos": 0, "end_char_pos": 20}, {"type": "R", "before": "1", "after": "\\mbox{%DIFAUXCMD gazeNet", "start_char_pos": 36, "end_char_pos": 37}, {"type": "D", "before": "new", "after": null, "start_char_pos": 52, "end_char_pos": 55}, {"type": "R", "before": "using deep neural networks. 
I will refer to this paper as the", "after": "(", "start_char_pos": 103, "end_char_pos": 164}, {"type": "D", "before": "paper", "after": null, "start_char_pos": 174, "end_char_pos": 179}, {"type": "A", "before": null, "after": ")", "start_char_pos": 182, "end_char_pos": 182}, {"type": "D", "before": "Error 1: The gazeNet machine-learning based", "after": null, "start_char_pos": 267, "end_char_pos": 310}, {"type": "A", "before": null, "after": "Error 1:", "start_char_pos": 327, "end_char_pos": 327}, {"type": "A", "before": null, "after": "The gazeNet", "start_char_pos": 328, "end_char_pos": 328}, {"type": "R", "before": "200 Hz", "after": "200Hz", "start_char_pos": 517, "end_char_pos": 523}, {"type": "R", "before": "200 Hz. Problem 1", "after": "200Hz.", "start_char_pos": 637, "end_char_pos": 654}, {"type": "A", "before": null, "after": "Problem 1", "start_char_pos": 681, "end_char_pos": 681}, {"type": "R", "before": "500 Hz", "after": "500Hz", "start_char_pos": 726, "end_char_pos": 732}, {"type": "D", "before": "Problem 2", "after": null, "start_char_pos": 784, "end_char_pos": 793}, {"type": "A", "before": null, "after": "Problem 2", "start_char_pos": 821, "end_char_pos": 821}, {"type": "D", "before": "Error 2", "after": null, "start_char_pos": 1022, "end_char_pos": 1029}, {"type": "A", "before": null, "after": "Error 2", "start_char_pos": 1057, "end_char_pos": 1057}, {"type": "R", "before": "the kappa statistic", "after": "kappa", "start_char_pos": 1415, "end_char_pos": 1434}, {"type": "R", "before": "the kappa statistic", "after": "kappa", "start_char_pos": 1486, "end_char_pos": 1505}], "sents_char_pos": [0, 130, 266, 644, 783, 1021, 1147, 1306]} {"doc_id": "1909.03348", "revision_depth": "2", "before_revision": "We reveal the different interpretations of the future in their judgments of future economic conditions by applying weakly supervised learning and text mining. 
In the Economy Watcher Survey, which is a market survey published by the Japanese government, there are assessments of current and future economic conditions by people from various fields. Although this survey provides insights regarding an economic policy for policymakers in Japan, there is no clear definition of the future, in future economic conditions . Hence, in the survey , respondents make their assessments based on their interpretations of the future. In our research, we separate the assessments of future economic conditions into near and distant future economic conditions using learning from positive and unlabeled data (PU learning) , which is weakly supervised learning. The dataset is composed of several periods, and we develop a PU learning algorithm for efficient training, using the dataset with the time series. Through empirical analysis , we interpret the classification results from the viewpoint of economics .", "after_revision": "The Economy Watcher Survey, which is a market survey published by the Japanese government, contains assessments of current and future economic conditions by people from various fields. Although this survey provides insights regarding economic policy for policymakers , a clear definition of the word \"future\" in future economic conditions is not provided . Hence, the assessments respondents provide in the survey are simply based on their interpretations of the meaning of \"future.\" This motivated us to reveal the different interpretations of the future in their judgments of future economic conditions by applying weakly supervised learning and text mining. In our research, we separate the assessments of future economic conditions into economic conditions of the near and distant future using learning from positive and unlabeled data (PU learning) . 
Because the dataset includes data from several periods, we devised new architecture to enable neural networks to conduct PU learning based on the idea of multi-task learning to efficiently learn a classifier. Our empirical analysis confirmed that the proposed method could separate the future economic conditions, and we interpreted the classification results to obtain intuitions for policymaking .", "edit_actions": [{"type": "R", "before": "We reveal the different interpretations of the future in their judgments of future economic conditions by applying weakly supervised learning and text mining. In the", "after": "The", "start_char_pos": 0, "end_char_pos": 165}, {"type": "R", "before": "there are", "after": "contains", "start_char_pos": 253, "end_char_pos": 262}, {"type": "D", "before": "an", "after": null, "start_char_pos": 397, "end_char_pos": 399}, {"type": "R", "before": "in Japan, there is no", "after": ", a", "start_char_pos": 433, "end_char_pos": 454}, {"type": "R", "before": "future,", "after": "word \"future\"", "start_char_pos": 479, "end_char_pos": 486}, {"type": "A", "before": null, "after": "is not provided", "start_char_pos": 517, "end_char_pos": 517}, {"type": "A", "before": null, "after": "the assessments respondents provide", "start_char_pos": 527, "end_char_pos": 527}, {"type": "R", "before": ", respondents make their assessments", "after": "are simply", "start_char_pos": 542, "end_char_pos": 578}, {"type": "R", "before": "future.", "after": "meaning of \"future.\" This motivated us to reveal the different interpretations of the future in their judgments of future economic conditions by applying weakly supervised learning and text mining.", "start_char_pos": 617, "end_char_pos": 624}, {"type": "A", "before": null, "after": "economic conditions of the", "start_char_pos": 705, "end_char_pos": 705}, {"type": "D", "before": "economic conditions", "after": null, "start_char_pos": 730, "end_char_pos": 749}, {"type": "R", "before": ", which is weakly 
supervised learning. The dataset is composed of", "after": ". Because the dataset includes data from", "start_char_pos": 812, "end_char_pos": 877}, {"type": "D", "before": "and we develop a PU learning algorithm for efficient training, using the dataset with the time series. Through empirical analysis ,", "after": null, "start_char_pos": 895, "end_char_pos": 1026}, {"type": "R", "before": "interpret", "after": "devised new architecture to enable neural networks to conduct PU learning based on the idea of multi-task learning to efficiently learn a classifier. Our empirical analysis confirmed that the proposed method could separate the future economic conditions, and we interpreted", "start_char_pos": 1030, "end_char_pos": 1039}, {"type": "R", "before": "from the viewpoint of economics", "after": "to obtain intuitions for policymaking", "start_char_pos": 1067, "end_char_pos": 1098}], "sents_char_pos": [0, 158, 347, 519, 624, 850, 997]} {"doc_id": "1909.05953", "revision_depth": "1", "before_revision": " This paper deals with the following separability problem in 3D space: Given a rigid polyhedron P with n vertices , does a semi-rigid polyhedron G exist, such that both polyhedra can be transformed into an inseparable assembled state, where the fixture snaps on to P, by applying a linear force and exploiting the mild flexibility of G? If such a flexible snapping polyhedron exists, devise an efficient and robust algorithm that constructs it. In simple words, we are looking for s semi-rigid polyhedron G, such that when P and G are separate , we can push G towards P , slightly bending G on the way , and obtain a configuration, where G is back in its original shape , and both P and G are inseparable as rigid bodies. We define certain properties such a pair of polyhedron and its snapping fixture may possess, and prove two theorems related to the pair. 
We introduce an algorithm that produces a snapping fixture with such properties in O(n ^5) time , if a snapping fixture exists , and an efficient and robust implementation of this algorithm .", "after_revision": "Fixtures for constraining the movement of parts have been extensively investigated in robotics, since they are essential for using robots in automated manufacturing. This paper deals with the design and optimized synthesis of a special type of fixtures, which we callsnapping fixtures. Given a polyhedral workpiece P with n vertices and of constant genus, which we need to hold, a snapping fixture is a semi-rigid polyhedron G , made of a palm and several fingers, such that when P and G are well separated , we can push P toward G , slightly bending the fingers of G on the way (exploiting its mild flexibility) , and obtain a configuration, where G is back in its original shape and P and G are inseparable as rigid bodies. We prove the minimal closure conditions under which such fixtures can hold parts, using Helly's theorem. We then introduce an algorithm running in O(n ^3) time that produces a snapping fixture, minimizing the number of fingers and optimizing additional objectives, if a snapping fixture exists . We also provide an efficient and robust implementation of a simpler version of the algorithm, which produces the fixture model to be 3D printed and runs in O(n^4) time. 
We describe two applications with different optimization criteria: Fixtures to hold add-ons for drones, where we aim to make the fixture as lightweight as possible, and small-scale fixtures to hold precious stones in jewelry, where we aim to maximize the exposure of the stones, namely minimize the obscuring of the workpiece by the fixture .", "edit_actions": [{"type": "A", "before": null, "after": "Fixtures for constraining the movement of parts have been extensively investigated in robotics, since they are essential for using robots in automated manufacturing.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "following separability problem in 3D space: Given a rigid polyhedron", "after": "design and optimized synthesis of a special type of fixtures, which we call", "start_char_pos": 27, "end_char_pos": 95}, {"type": "A", "before": null, "after": "snapping fixtures", "start_char_pos": 95, "end_char_pos": 95}, {"type": "A", "before": null, "after": ". Given a polyhedral workpiece", "start_char_pos": 95, "end_char_pos": 95}, {"type": "R", "before": ", does a", "after": "and of constant genus, which we need to hold, a snapping fixture is a", "start_char_pos": 114, "end_char_pos": 122}, {"type": "R", "before": "exist, such that both polyhedra can be transformed into an inseparable assembled state, where the fixture snaps on to P, by applying a linear force and exploiting the mild flexibility of G? If such a flexible snapping polyhedron exists, devise an efficient and robust algorithm that constructs it. 
In simple words, we are looking for s semi-rigid polyhedron G,", "after": ", made of a palm and several fingers,", "start_char_pos": 147, "end_char_pos": 507}, {"type": "R", "before": "separate", "after": "well separated", "start_char_pos": 535, "end_char_pos": 543}, {"type": "R", "before": "G towards P", "after": "P toward G", "start_char_pos": 558, "end_char_pos": 569}, {"type": "A", "before": null, "after": "the fingers of", "start_char_pos": 589, "end_char_pos": 589}, {"type": "A", "before": null, "after": "(exploiting its mild flexibility)", "start_char_pos": 603, "end_char_pos": 603}, {"type": "R", "before": ", and both", "after": "and", "start_char_pos": 672, "end_char_pos": 682}, {"type": "R", "before": "define certain properties such a pair of polyhedron and its snapping fixture may possess, and prove two theorems related to the pair. We", "after": "prove the minimal closure conditions under which such fixtures can hold parts, using Helly's theorem. We then", "start_char_pos": 727, "end_char_pos": 863}, {"type": "R", "before": "that produces a snapping fixture with such properties", "after": "running", "start_char_pos": 887, "end_char_pos": 940}, {"type": "R", "before": "^5) time ,", "after": "^3) time that produces a snapping fixture, minimizing the number of fingers and optimizing additional objectives,", "start_char_pos": 948, "end_char_pos": 958}, {"type": "R", "before": ", and", "after": ". We also provide", "start_char_pos": 988, "end_char_pos": 993}, {"type": "R", "before": "this algorithm", "after": "a simpler version of the algorithm, which produces the fixture model to be 3D printed and runs in O(n^4) time. 
We describe two applications with different optimization criteria: Fixtures to hold add-ons for drones, where we aim to make the fixture as lightweight as possible, and small-scale fixtures to hold precious stones in jewelry, where we aim to maximize the exposure of the stones, namely minimize the obscuring of the workpiece by the fixture", "start_char_pos": 1036, "end_char_pos": 1050}], "sents_char_pos": [0, 444, 723, 860]} {"doc_id": "1909.06527", "revision_depth": "2", "before_revision": "We focus on the problem of designing an artificial agent , capable of assisting a human user to complete a task. Our goal is to guide human users towards optimal task performance while keeping their cognitive load as low as possible. Our insight is that in order to do so , we should develop an understanding of human decision making for the task domain . In this work, we consider the domain of collaborative packing, and as a first step, we explore the mechanisms underlying human packing strategies. Specifically, we conducted a user study in which 100 human participants completed a series of packing tasks in a virtual environment. We analyzed their packing strategies and discovered that they exhibit specific spatial and temporal patterns (e.g., humans tend to place larger items into corners first) . We expect that imbuing an artificial agent with an understanding of such a spatiotemporal structure will enable improved assistance, which will be reflected in the task performance and human perception of the artificial agent . Ongoing work involves the development of a framework that incorporates the extracted insights to predict and manipulate human decision making towards an efficient trajectory of low cognitive load . A follow-up study will evaluate our framework against a set of baselines featuring distinct strategies of assistance. 
Our eventual goal is the deployment and evaluation of our framework on an autonomous robotic manipulator, actively assisting users on a packing task.", "after_revision": "We focus on the problem of designing an artificial agent (AI) , capable of assisting a human user to complete a task. Our goal is to guide human users towards optimal task performance while keeping their cognitive load as low as possible. Our insight is that doing so requires an understanding of human decision making for the task domain at hand . In this work, we consider the domain of collaborative packing, in which an AI agent provides placement recommendations to a human user. As a first step, we explore the mechanisms underlying human packing strategies. We conducted a user study in which 100 human participants completed a series of packing tasks in a virtual environment. We analyzed their packing strategies and discovered spatial and temporal patterns , such as that humans tend to place larger items at corners first . We expect that imbuing an artificial agent with an understanding of this spatiotemporal structure will enable improved assistance, which will be reflected in the task performance and the human perception of the AI . Ongoing work involves the development of a framework that incorporates the extracted insights to predict and manipulate human decision making towards an efficient trajectory of low cognitive load and high efficiency . A follow-up study will evaluate our framework against a set of baselines featuring alternative strategies of assistance. 
Our eventual goal is the deployment and evaluation of our framework on an autonomous robotic manipulator, actively assisting users on a packing task.", "edit_actions": [{"type": "A", "before": null, "after": "(AI)", "start_char_pos": 57, "end_char_pos": 57}, {"type": "R", "before": "in order to do so , we should develop", "after": "doing so requires", "start_char_pos": 255, "end_char_pos": 292}, {"type": "A", "before": null, "after": "at hand", "start_char_pos": 355, "end_char_pos": 355}, {"type": "R", "before": "and as a", "after": "in which an AI agent provides placement recommendations to a human user. As a", "start_char_pos": 421, "end_char_pos": 429}, {"type": "R", "before": "Specifically, we", "after": "We", "start_char_pos": 505, "end_char_pos": 521}, {"type": "D", "before": "that they exhibit specific", "after": null, "start_char_pos": 691, "end_char_pos": 717}, {"type": "R", "before": "(e.g.,", "after": ", such as that", "start_char_pos": 748, "end_char_pos": 754}, {"type": "R", "before": "into corners first)", "after": "at corners first", "start_char_pos": 789, "end_char_pos": 808}, {"type": "R", "before": "such a", "after": "this", "start_char_pos": 879, "end_char_pos": 885}, {"type": "A", "before": null, "after": "the", "start_char_pos": 996, "end_char_pos": 996}, {"type": "R", "before": "artificial agent", "after": "AI", "start_char_pos": 1021, "end_char_pos": 1037}, {"type": "A", "before": null, "after": "and high efficiency", "start_char_pos": 1236, "end_char_pos": 1236}, {"type": "R", "before": "distinct", "after": "alternative", "start_char_pos": 1322, "end_char_pos": 1330}], "sents_char_pos": [0, 113, 234, 357, 504, 638, 810, 1039, 1238, 1356]} {"doc_id": "1909.10139", "revision_depth": "1", "before_revision": "Prosocial behavior, which means paying a cost for others to receive a benefit, is encountered in the donation game, the prisoner's dilemma, relaxed social dilemmas, and public goods games. 
Many studies of prosociality assume that the population structure is either homogeneous, meaning all individuals have the same number of interaction partners, or that the social good is of one particular type. Here, we study general evolutionary dynamics for arbitrary kinds of social goods. We find that heterogeneous population structures, where some individuals have many more interaction partners than others, are extremely conducive for the evolution of prosocial behaviors. Furthermore, prosocial behaviors can evolve that accumulate most of the benefit in a few highly connected nodes while many peripheral nodes receive low or negative payoff. Surprisingly, prosociality can evolve even if the total costs exceed the total benefits. Therefore, the highly heterogeneous interaction structure of human society, which is augmented by the internet, strongly promotes the emergence of prosocial behaviors but also creates the possibility of generating tremendous inequality.", "after_revision": "Prosocial behaviors are encountered in the donation game, the prisoner's dilemma, relaxed social dilemmas, and public goods games. Many studies assume that the population structure is homogeneous, meaning all individuals have the same number of interaction partners, or that the social good is of one particular type. Here, we explore general evolutionary dynamics for arbitrary spatial structures and social goods. We find that heterogeneous networks, wherein some individuals have many more interaction partners than others, can enhance the evolution of prosocial behaviors. However, they often accumulate most of the benefits in the hands of a few highly-connected individuals, while many others receive low or negative payoff. Surprisingly, selection can favor producers of social goods even if the total costs exceed the total benefits. 
In summary, heterogeneous structures have the ability to strongly promote the emergence of prosocial behaviors , but they also create the possibility of generating large inequality.", "edit_actions": [{"type": "R", "before": "behavior, which means paying a cost for others to receive a benefit, is", "after": "behaviors are", "start_char_pos": 10, "end_char_pos": 81}, {"type": "D", "before": "of prosociality", "after": null, "start_char_pos": 202, "end_char_pos": 217}, {"type": "D", "before": "either", "after": null, "start_char_pos": 258, "end_char_pos": 264}, {"type": "R", "before": "study", "after": "explore", "start_char_pos": 408, "end_char_pos": 413}, {"type": "R", "before": "kinds of", "after": "spatial structures and", "start_char_pos": 458, "end_char_pos": 466}, {"type": "R", "before": "population structures, where", "after": "networks, wherein", "start_char_pos": 508, "end_char_pos": 536}, {"type": "R", "before": "are extremely conducive for", "after": "can enhance", "start_char_pos": 603, "end_char_pos": 630}, {"type": "R", "before": "Furthermore, prosocial behaviors can evolve that", "after": "However, they often", "start_char_pos": 669, "end_char_pos": 717}, {"type": "R", "before": "benefit in a few highly connected nodes while many peripheral nodes", "after": "benefits in the hands of a few highly-connected individuals, while many others", "start_char_pos": 741, "end_char_pos": 808}, {"type": "R", "before": "prosociality can evolve", "after": "selection can favor producers of social goods", "start_char_pos": 855, "end_char_pos": 878}, {"type": "R", "before": "Therefore, the highly heterogeneous interaction structure of human society, which is augmented by the internet, strongly promotes", "after": "In summary, heterogeneous structures have the ability to strongly promote", "start_char_pos": 930, "end_char_pos": 1059}, {"type": "R", "before": "but also creates", "after": ", but they also create", "start_char_pos": 1097, "end_char_pos": 1113}, {"type": 
"R", "before": "tremendous", "after": "large", "start_char_pos": 1144, "end_char_pos": 1154}], "sents_char_pos": [0, 188, 398, 480, 668, 840, 929]} {"doc_id": "1909.11211", "revision_depth": "1", "before_revision": "The menstrual cycle is a key indicator of overall health for women of reproductive age. Previously, the study of women's menstruation was done primarily through survey results; however, as mobile apps for menstrual tracking become more widely adopted, they provide an increasingly large, content-rich source of menstrual health experiences and behaviors over time. In this paper, we show that self-reported data from menstrual trackers can reveal statistically significant relationships between per-person variability of cycle length and self-reported qualitative symptoms. Specifically, we explore a database collected using the Clue app by Biowink GmbH of over 378,000 users and 4.9 million natural cycles of user-tracked observations across a wide range of categories to better understand variation of menstrual experience within and across individuals. A concern for self-tracked data is that these reflect not only physiological behaviors, but also the engagement dynamics of app users. We mitigate such potential artifacts by developing a procedure to exclude cycles lacking user engagement, thereby allowing us to better distinguish true menstrual patterns from tracking anomalies. We find that women located at different ends of the spectrum of menstrual patterns , based on the consistency of their cycle length statistics, exhibit statistically significant differences in cycle characteristics and symptom tracking . Our findings showcase the potential of longitudinal, high-resolution self-tracked data for an improved understanding of menstruation and women's health as a whole.", "after_revision": "The menstrual cycle is a key indicator of overall health for women of reproductive age. 
Previously, menstruation was primarily studied through survey results; however, as menstrual tracking mobile apps become more widely adopted, they provide an increasingly large, content-rich source of menstrual health experiences and behaviors over time. By exploring a database of user-tracked observations from the Clue app by BioWink of over 378,000 users and 4.9 million natural cycles, we show that self-reported menstrual tracker data can reveal statistically significant relationships between per-person cycle length variability and self-reported qualitative symptoms. A concern for self-tracked data is that they reflect not only physiological behaviors, but also the engagement dynamics of app users. To mitigate such potential artifacts , we develop a procedure to exclude cycles lacking user engagement, thereby allowing us to better distinguish true menstrual patterns from tracking anomalies. We uncover that women located at different ends of the menstrual variability spectrum , based on the consistency of their cycle length statistics, exhibit statistically significant differences in their cycle characteristics and symptom tracking patterns. We also find that cycle and period length statistics are stationary over the app usage timeline across the variability spectrum. The symptoms that we identify as showing statistically significant association with timing data can be useful to clinicians and users for predicting cycle variability from symptoms or as potential health indicators for conditions like endometriosis. 
Our findings showcase the potential of longitudinal, high-resolution self-tracked data to improve understanding of menstruation and women's health as a whole.", "edit_actions": [{"type": "R", "before": "the study of women's menstruation was done primarily", "after": "menstruation was primarily studied", "start_char_pos": 100, "end_char_pos": 152}, {"type": "R", "before": "mobile apps for menstrual tracking", "after": "menstrual tracking mobile apps", "start_char_pos": 189, "end_char_pos": 223}, {"type": "R", "before": "In this paper,", "after": "By exploring a database of user-tracked observations from the Clue app by BioWink of over 378,000 users and 4.9 million natural cycles,", "start_char_pos": 365, "end_char_pos": 379}, {"type": "R", "before": "data from menstrual trackers", "after": "menstrual tracker data", "start_char_pos": 407, "end_char_pos": 435}, {"type": "R", "before": "variability of cycle length", "after": "cycle length variability", "start_char_pos": 506, "end_char_pos": 533}, {"type": "D", "before": "Specifically, we explore a database collected using the Clue app by Biowink GmbH of over 378,000 users and 4.9 million natural cycles of user-tracked observations across a wide range of categories to better understand variation of menstrual experience within and across individuals.", "after": null, "start_char_pos": 574, "end_char_pos": 856}, {"type": "R", "before": "these", "after": "they", "start_char_pos": 897, "end_char_pos": 902}, {"type": "R", "before": "We", "after": "To", "start_char_pos": 992, "end_char_pos": 994}, {"type": "R", "before": "by developing", "after": ", we develop", "start_char_pos": 1029, "end_char_pos": 1042}, {"type": "R", "before": "find", "after": "uncover", "start_char_pos": 1192, "end_char_pos": 1196}, {"type": "R", "before": "spectrum of menstrual patterns", "after": "menstrual variability spectrum", "start_char_pos": 1241, "end_char_pos": 1271}, {"type": "A", "before": null, "after": "their", "start_char_pos": 1382, 
"end_char_pos": 1382}, {"type": "R", "before": ".", "after": "patterns. We also find that cycle and period length statistics are stationary over the app usage timeline across the variability spectrum. The symptoms that we identify as showing statistically significant association with timing data can be useful to clinicians and users for predicting cycle variability from symptoms or as potential health indicators for conditions like endometriosis.", "start_char_pos": 1426, "end_char_pos": 1427}, {"type": "R", "before": "for an improved", "after": "to improve", "start_char_pos": 1515, "end_char_pos": 1530}], "sents_char_pos": [0, 87, 176, 364, 573, 856, 991, 1188, 1427]} {"doc_id": "1909.12449", "revision_depth": "1", "before_revision": "Feedback of calcium signaling through calcium-activated potassium channels of a dendritic spine is investigated. To simulate such ion channels and the resulting spatial distribution of concentration, current, and membrane voltage within the dendritic spine , we applythe immersed boundary method with electrodiffusion . In this simulation method , the permeability to ion flow across the membrane is regulated by the amplitude of chemical potential barriers along the membrane . With spatially localized ion channels, chemical potential barriers are locally and stochastically regulated. This represents the ion channel gating with multiple subunits, the open and closed states of which are governed by a continuous-time Markov process. The model simulation recapitulates an inhibitory action on voltage-sensitive calcium channels by calcium-activated potassium channels in a stochastic manner with non-local feedback loop. The model also predicts higher calcium influx with more closely placed channel complexes, resolving a potential mechanism of differential calcium handling by \\emph{locality of channel distribution . 
This work provides a foundation for future computer simulation studies of dendritic spine motility and structural plasticity.", "after_revision": "We investigate calcium signaling feedback through calcium-activated potassium channels of a dendritic spine by applying the immersed boundary method with electrodiffusion. We simulate the stochastic gating of such ion channels and the resulting spatial distribution of concentration, current, and membrane voltage within the dendritic spine . In this simulation , the permeability to ionic flow across the membrane is regulated by the amplitude of chemical potential barriers . With spatially localized ion channels, chemical potential barriers are locally and stochastically regulated. This regulation represents the ion channel gating with multiple subunits, the open and closed states governed by a continuous-time Markov process. The model simulation recapitulates an inhibitory action on voltage-sensitive calcium channels by the calcium-activated potassium channels in a stochastic manner as a non-local feedback loop. The model predicts amplified calcium influx with more closely placed channel complexes, proposing a potential mechanism of differential calcium handling by \\emph{ channel distributions . This work provides a foundation for future computer simulation studies of dendritic spine motility and structural plasticity.", "edit_actions": [{"type": "R", "before": "Feedback of calcium signaling", "after": "We investigate calcium signaling feedback", "start_char_pos": 0, "end_char_pos": 29}, {"type": "R", "before": "is investigated. To simulate", "after": "by applying the immersed boundary method with electrodiffusion. 
We simulate the stochastic gating of", "start_char_pos": 96, "end_char_pos": 124}, {"type": "D", "before": ", we apply", "after": null, "start_char_pos": 257, "end_char_pos": 267}, {"type": "D", "before": "the immersed boundary method with electrodiffusion", "after": null, "start_char_pos": 267, "end_char_pos": 317}, {"type": "D", "before": "method", "after": null, "start_char_pos": 339, "end_char_pos": 345}, {"type": "R", "before": "ion", "after": "ionic", "start_char_pos": 368, "end_char_pos": 371}, {"type": "D", "before": "along the membrane", "after": null, "start_char_pos": 458, "end_char_pos": 476}, {"type": "A", "before": null, "after": "regulation", "start_char_pos": 593, "end_char_pos": 593}, {"type": "D", "before": "of which are", "after": null, "start_char_pos": 679, "end_char_pos": 691}, {"type": "A", "before": null, "after": "the", "start_char_pos": 835, "end_char_pos": 835}, {"type": "R", "before": "with", "after": "as a", "start_char_pos": 896, "end_char_pos": 900}, {"type": "R", "before": "also predicts higher", "after": "predicts amplified", "start_char_pos": 936, "end_char_pos": 956}, {"type": "R", "before": "resolving", "after": "proposing", "start_char_pos": 1016, "end_char_pos": 1025}, {"type": "D", "before": "locality", "after": null, "start_char_pos": 1090, "end_char_pos": 1098}, {"type": "R", "before": "of channel distribution", "after": "channel distributions", "start_char_pos": 1099, "end_char_pos": 1122}], "sents_char_pos": [0, 112, 319, 478, 587, 737, 925, 1124]} {"doc_id": "1909.12578", "revision_depth": "2", "before_revision": "We study a financial market where the risky asset is modelled by a Brownian motion driven stochastic differential equation with a local time in the drift term.This models a situation where the asset price is partially controlled by a company which intervenes when the price is reaching a certain lower barrier in order to prevent it from going below that barrier . See e.g. 
Jarrow \\& Protter \\mbox{%DIFAUXCMD JP for an explanation and discussion of this model . As already pointed out by Karatzas \\& Shreve \\mbox{%DIFAUXCMD KS0pt%DIFAUXCMD (see also Jarrow \\& Protter \\mbox{%DIFAUXCMD JP }0pt%DIFAUXCMD )} } this allows for arbitrages in the market. The model is also relevant for high frequency trading issues. See e.g. Lachapelle\\textit{et al \\mbox{%DIFAUXCMD \\cite{LLLL }\\hspace{0pt}%DIFAUXCMD and the references therein} . In this paper we consider the case when there is a delay \\theta> 0 in the information flow available for the trader. Using white noise calculus we compute explicitly the optimal consumption rate and portfolio in this case and we show that the maximal value is finite as long as \\theta> 0. This implies that there is no arbitrage in the market in that case. However, when \\theta goes to 0, the value goes to infinity. This is in agreement with the above result that is an arbitrage when there is no delay } .", "after_revision": "We study a financial market where the risky asset is modelled by a geometric It\\^o-L\\'evy process, with a singular drift term.This can for example model a situation where the asset price is partially controlled by a company which intervenes when the price is reaching a certain lower barrier . See e.g. Jarrow Protter JP for an explanation and discussion of this model in the Brownian motion case . As already pointed out by Karatzas 0pt%DIFAUXCMD (see also Jarrow \\& Protter \\mbox{%DIFAUXCMD JP }0pt%DIFAUXCMD )} Shreve KS} (in the Brownian motion case), this allows for arbitrages in the market. \\textit{ }\\hspace{0pt}%DIFAUXCMD and the references therein} However, the situation in the case of jumps is not clear. Moreover, it is not clear what happens if there is a delay in the system . 
In this paper we consider a jump diffusion market model with a singular drift term modelled as the local time of a given process, and with a delay \\theta> 0 in the information flow available for the trader. Using white noise calculus we compute explicitly the optimal consumption rate and portfolio in this case and we show that the maximal value is finite as long as \\theta> 0. This implies that there is no arbitrage in the market in that case. However, when \\theta goes to 0, the value goes to infinity. This is in agreement with the above result that is an arbitrage when there is no delay . Our model is also relevant for high frequency trading issues. See e.g. Lachapelle et al LLLL} and the references therein .", "edit_actions": [{"type": "R", "before": "Brownian motion driven stochastic differential equation with a local time in the", "after": "geometric It\\^o-L\\'evy process, with a singular", "start_char_pos": 67, "end_char_pos": 147}, {"type": "R", "before": "models", "after": "can for example model", "start_char_pos": 164, "end_char_pos": 170}, {"type": "D", "before": "in order to prevent it from going below that barrier", "after": null, "start_char_pos": 310, "end_char_pos": 362}, {"type": "R", "before": "\\& Protter \\mbox{%DIFAUXCMD JP", "after": "Protter", "start_char_pos": 381, "end_char_pos": 411}, {"type": "A", "before": null, "after": "JP", "start_char_pos": 412, "end_char_pos": 412}, {"type": "A", "before": null, "after": "in the Brownian motion case", "start_char_pos": 461, "end_char_pos": 461}, {"type": "D", "before": "\\& Shreve \\mbox{%DIFAUXCMD KS", "after": null, "start_char_pos": 499, "end_char_pos": 528}, {"type": "A", "before": null, "after": "Shreve", "start_char_pos": 608, "end_char_pos": 608}, {"type": "A", "before": null, "after": "KS", "start_char_pos": 609, "end_char_pos": 609}, {"type": "A", "before": null, "after": "(in the Brownian motion case),", "start_char_pos": 611, "end_char_pos": 611}, {"type": "D", "before": "The model is 
also relevant for high frequency trading issues. See e.g. Lachapelle", "after": null, "start_char_pos": 654, "end_char_pos": 735}, {"type": "D", "before": "et al", "after": null, "start_char_pos": 743, "end_char_pos": 748}, {"type": "D", "before": "\\mbox{%DIFAUXCMD \\cite{LLLL", "after": null, "start_char_pos": 749, "end_char_pos": 776}, {"type": "A", "before": null, "after": "However, the situation in the case of jumps is not clear. Moreover, it is not clear what happens if there is a delay in the system", "start_char_pos": 829, "end_char_pos": 829}, {"type": "R", "before": "the case when there is a", "after": "a jump diffusion market model with a singular drift term modelled as the local time of a given process, and with a", "start_char_pos": 858, "end_char_pos": 882}, {"type": "A", "before": null, "after": ". Our model is also relevant for high frequency trading issues. See e.g. Lachapelle et al", "start_char_pos": 1336, "end_char_pos": 1336}, {"type": "A", "before": null, "after": "LLLL", "start_char_pos": 1337, "end_char_pos": 1337}, {"type": "A", "before": null, "after": "and the references therein", "start_char_pos": 1339, "end_char_pos": 1339}], "sents_char_pos": [0, 159, 364, 463, 653, 715, 831, 948, 1120, 1188, 1248]} {"doc_id": "1910.01490", "revision_depth": "1", "before_revision": " Hutchinson, Lo and Poggio raised the question that if learning works can learn the Black-Scholes formula, and they proposed the network mapping the ratio of underlying price to strike S_t/K and the\\to time to maturity \\tau directly into the ratio of option priceto strike C_t/K . In this paper we propose a novel descision function and study the network mapping S_t/K and%DIFDELCMD < \\tauinto %%% the ratio of time value to strike\\to V_t/K . Time values' appearance in artificial intelligence fits into traders' natural intelligence. 
Empirical experiments will be carried out to demonstrate that it significantly improves Hutchinson-Lo-Poggio's original model by faster learning and better generalization performance. In order to take a conceptual viewpoint and to prove that V_t/K but not C_t/K can be approximated by superpositions of logistic functions on its domain of definition, we work on the theory of universal approximation on unbounded domains. We prove some general results which imply that an artificial neural network with a single hidden layer and sigmoid activation represents no function in L^{p(}%DIFDELCMD < \\RR%%% ^2 \\times 0, 1%DIFDELCMD < ]%%% ^{n neural network with a single hidden layer and logistic activation is a universal approximator of L^{2}( %DIFDELCMD < \\RR %%% \\times [0, 1] ^{n^+ \\times } [0, 1] ^{n .", "after_revision": "The mathematical theory of neural networks is ``meant to tell us what is possible and, sometimes equally importantly, what is not.\" This paper contributes a case that fits into this principle. We propose that \"option price or time value\" is a natural hyperparameter in the design of neural network option models. Hutchinson, Lo and Poggio asked the question that if learning networks can learn the Black-Scholes formula, and they studied the network ( S_t/K , \\tau)\\to C_t/K where S_t, K, \\tau, C_t are the underlying price, strike, time to maturity and option price . In this paper we propose a novel decision function and study the network ( S_t/K %DIFDELCMD < \\tauinto %%% , \\tau)\\to V_t/K where V_t is the time value. Empirical experiments will be carried out to demonstrate that this new decision function significantly improves Hutchinson-Lo-Poggio's model by faster learning and better generalization performance. (}%DIFDELCMD < \\RR%%% %DIFDELCMD < ]%%% We prove that a shallow neural network with the logistic activation is a universal approximator in L^{2}( %DIFDELCMD < \\RR %%% \\mathbb{R \\times [0, 1] ). 
As a corollary V_t/K but not C_t/K can be approximated by superpositions of logistic functions on \\mathbb{R^+ \\times } [0, 1] . This justifies the benefit of time value oriented decision functions in option pricing models .", "edit_actions": [{"type": "A", "before": null, "after": "The mathematical theory of neural networks is ``meant to tell us what is possible and, sometimes equally importantly, what is not.\" This paper contributes a case that fits into this principle. We propose that \"option price or time value\" is a natural hyperparameter in the design of neural network option models.", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "raised", "after": "asked", "start_char_pos": 27, "end_char_pos": 33}, {"type": "R", "before": "works", "after": "networks", "start_char_pos": 64, "end_char_pos": 69}, {"type": "R", "before": "proposed the network mapping the ratio of underlying price to strike", "after": "studied the network (", "start_char_pos": 116, "end_char_pos": 184}, {"type": "R", "before": "and the", "after": ", \\tau)", "start_char_pos": 191, "end_char_pos": 198}, {"type": "A", "before": null, "after": "C_t/K where S_t, K, \\tau, C_t are the underlying price, strike,", "start_char_pos": 202, "end_char_pos": 202}, {"type": "R", "before": "\\tau directly into the ratio of option priceto strike C_t/K", "after": "and option price", "start_char_pos": 220, "end_char_pos": 279}, {"type": "R", "before": "descision", "after": "decision", "start_char_pos": 315, "end_char_pos": 324}, {"type": "R", "before": "mapping", "after": "(", "start_char_pos": 356, "end_char_pos": 363}, {"type": "D", "before": "and", "after": null, "start_char_pos": 370, "end_char_pos": 373}, {"type": "R", "before": "the ratio of time value to strike", "after": ", \\tau)", "start_char_pos": 399, "end_char_pos": 432}, {"type": "R", "before": ". 
Time values' appearance in artificial intelligence fits into traders' natural intelligence.", "after": "where V_t is the time value.", "start_char_pos": 442, "end_char_pos": 535}, {"type": "R", "before": "it", "after": "this new decision function", "start_char_pos": 598, "end_char_pos": 600}, {"type": "D", "before": "original", "after": null, "start_char_pos": 647, "end_char_pos": 655}, {"type": "D", "before": "In order to take a conceptual viewpoint and to prove that V_t/K but not C_t/K can be approximated by superpositions of logistic functions on its domain of definition, we work on the theory of universal approximation on unbounded domains. We prove some general results which imply that an artificial neural network with a single hidden layer and sigmoid activation represents no function in L^{p", "after": null, "start_char_pos": 720, "end_char_pos": 1114}, {"type": "D", "before": "^2 \\times", "after": null, "start_char_pos": 1136, "end_char_pos": 1145}, {"type": "D", "before": "0, 1", "after": null, "start_char_pos": 1146, "end_char_pos": 1150}, {"type": "R", "before": "^{n", "after": "We prove that a shallow", "start_char_pos": 1168, "end_char_pos": 1171}, {"type": "R", "before": "a single hidden layer and", "after": "the", "start_char_pos": 1192, "end_char_pos": 1217}, {"type": "R", "before": "of", "after": "in", "start_char_pos": 1266, "end_char_pos": 1268}, {"type": "A", "before": null, "after": "\\mathbb{R", "start_char_pos": 1297, "end_char_pos": 1297}, {"type": "R", "before": "^{n", "after": "). As a corollary V_t/K but not C_t/K can be approximated by superpositions of logistic functions on \\mathbb{R", "start_char_pos": 1312, "end_char_pos": 1315}, {"type": "R", "before": "^{n", "after": ". 
This justifies the benefit of time value oriented decision functions in option pricing models", "start_char_pos": 1334, "end_char_pos": 1337}], "sents_char_pos": [0, 456, 535, 719, 957]} {"doc_id": "1910.03337", "revision_depth": "2", "before_revision": " Discovering functional modules in protein-protein interaction networks through optimization remains a longstanding challenge in Biology . Traditional algorithms simply consider strong protein complexes that can be found in the original network by optimizing some metric, which may cause obstacles for the discovery of weak and hidden complexes that are overshadowed by strong complexes. Additionally , protein complexes not only have different densities but also a various range of scales, making them extremely difficult to be detected. We address these issues and propose a hierarchical hidden community detection approach to accurately predict protein complexes of various strengths and scales. We propose a meta-method called HirHide (Hierarchical Hidden Community Detection), which can adopt any standard community detection method as the base algorithm and enable it to discover hierarchical hidden communities as well as boosting the detection on hierarchical strong communities. To our knowledge, this is the first combination of hierarchical structure with hidden structure, which provides a new perspective for finding protein complexes of various strengths and scales . We compare the performance of several standard community detection methods with their HirHide versions. Experimental results show that the HirHide versions achieve better performance and sometimes even significantly outperform the baselines.", "after_revision": "Motivation: Discovering functional modules in protein-protein interaction (PPI) networks by optimization methods remains a longstanding challenge in biology . 
Traditional algorithms simply consider strong protein complexes that can be found in the original network by optimizing some metrics, which causes obstacles for the discovery of weak and hidden complexes shielded by stronger complexes. Also , protein complexes are not only in different density but also in a large range of scale, making it extremely difficult to be detected. Toward this objective, we propose a hierarchical hidden community approach to predict protein complexes . Results: We propose a method called HirHide (Hierarchical Hidden Community Detection), which can be combined with traditional community detection methods to enable them to discover hierarchical hidden communities . It is the first community detection algorithm that can find a hierarchical structure as well as hidden structure . We compare the performance of three traditional methods with their HirHide versions. Experimental results show that the HirHide methods using traditional methods as the base algorithms achieve better performance , sometimes even significantly outperform the baselines.", "edit_actions": [{"type": "A", "before": null, "after": "Motivation:", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "networks through optimization", "after": "(PPI) networks by optimization methods", "start_char_pos": 63, "end_char_pos": 92}, {"type": "R", "before": "Biology", "after": "biology", "start_char_pos": 129, "end_char_pos": 136}, {"type": "R", "before": "metric, which may cause", "after": "metrics, which causes", "start_char_pos": 264, "end_char_pos": 287}, {"type": "R", "before": "that are overshadowed by strong complexes. Additionally", "after": "shielded by stronger complexes. 
Also", "start_char_pos": 345, "end_char_pos": 400}, {"type": "R", "before": "not only have different densities but also a various range of scales, making them", "after": "are not only in different density but also in a large range of scale, making it", "start_char_pos": 421, "end_char_pos": 502}, {"type": "R", "before": "We address these issues and", "after": "Toward this objective, we", "start_char_pos": 539, "end_char_pos": 566}, {"type": "R", "before": "detection approach to accurately", "after": "approach to", "start_char_pos": 607, "end_char_pos": 639}, {"type": "R", "before": "of various strengths and scales.", "after": ". Results:", "start_char_pos": 666, "end_char_pos": 698}, {"type": "R", "before": "meta-method", "after": "method", "start_char_pos": 712, "end_char_pos": 723}, {"type": "R", "before": "adopt any standard community detection method as the base algorithm and enable it", "after": "be combined with traditional community detection methods to enable them", "start_char_pos": 792, "end_char_pos": 873}, {"type": "R", "before": "as well as boosting the detection on hierarchical strong communities. To our knowledge, this", "after": ". 
It", "start_char_pos": 918, "end_char_pos": 1010}, {"type": "R", "before": "combination of hierarchical structure with hidden structure, which provides a new perspective for finding protein complexes of various strengths and scales", "after": "community detection algorithm that can find a hierarchical structure as well as hidden structure", "start_char_pos": 1024, "end_char_pos": 1179}, {"type": "R", "before": "several standard community detection", "after": "three traditional", "start_char_pos": 1212, "end_char_pos": 1248}, {"type": "R", "before": "versions", "after": "methods using traditional methods as the base algorithms", "start_char_pos": 1329, "end_char_pos": 1337}, {"type": "R", "before": "and", "after": ",", "start_char_pos": 1365, "end_char_pos": 1368}], "sents_char_pos": [0, 138, 387, 538, 698, 987, 1181, 1285]} {"doc_id": "1910.07747", "revision_depth": "1", "before_revision": "In recent , deep learning-based feature representation methods have shown a promising impact in electroencephalography (EEG)-based brain-computer interface (BCI). Nonetheless, due to high intra- and inter-subject variabilities, many studies on decoding EEG were designed in a subject-specific manner by using calibration samples, with no much concern on its less practical, time-consuming, and data-hungry process. To tackle this problem, recent studies took advantage of transfer learning, especially using domain adaptation techniques . However, there still remain two challenging limitations ; i) most domain adaptation methods are designed for labeled source and unlabeled target domain whereas BCI tasks generally have multiple annotated domains. ii) most of the methods do not consider negatively transferable to disrupt generalization ability. In this paper, we propose a novel network architecture to tackle those limitations by estimating mutual information in high-level representation and low-level representation, separately . 
Specifically, our proposed method extracts domain-invariant and class-relevant features, thereby enhancing generalizability in classification across . It is also noteworthy that our method can be applicable to a new subject with a small amount of data via a fine-tuning, step only, reducing calibration time for practical uses. We validated our proposed method on a big motor imagery EEG dataset by showing promising results, compared to competing methodsconsidered in our experiments .", "after_revision": "In recent years , deep learning-based feature representation methods have shown a promising impact in electroencephalography (EEG)-based brain-computer interface (BCI). Nonetheless, there still exist BCI-illiterate subjects who struggle to use BCI systems, showing high intra-subject variabilities. Several methods have been proposed to enhance their performance via transfer learning. In such a case, high inter- and intrasubject variabilities are the points to be considered. Transfer learning, especially as a domain adaptation technique, has drawn increasing attention in various fields . However, the adaptation of approaches into BCI faces two challenging limitations . (i) Most domain adaptation methods are designed for labeled source and unlabeled target domain whereas BCI tasks generally have multiple annotated domains. (ii) Most of the existing methods do not consider a negative transfer to disrupt generalization ability. In this paper, we propose a novel network architecture to tackle these limitations by estimating mutual information in high- and low-level representations regardless of the domain that is considered as a subject in this paper . Specifically, our proposed method extracts subject-invariant and class-relevant features, thereby enhancing generalizability in overall classification . It is also noteworthy that our method can be applicable to a new subject with a small amount of data via fine-tuning, thus reducing calibration time for its practical uses. 
We validated our proposed method on two large motor imagery EEG datasets via comparisons with other competing methods .", "edit_actions": [{"type": "A", "before": null, "after": "years", "start_char_pos": 10, "end_char_pos": 10}, {"type": "R", "before": "due to", "after": "there still exist BCI-illiterate subjects who struggle to use BCI systems, showing high intra-subject variabilities. Several methods have been proposed to enhance their performance via transfer learning. In such a case,", "start_char_pos": 177, "end_char_pos": 183}, {"type": "R", "before": "intra- and inter-subject variabilities, many studies on decoding EEG were designed in a subject-specific manner by using calibration samples, with no much concern on its less practical, time-consuming, and data-hungry process. To tackle this problem, recent studies took advantage of transfer", "after": "inter- and intrasubject variabilities are the points to be considered. Transfer", "start_char_pos": 189, "end_char_pos": 481}, {"type": "R", "before": "using domain adaptation techniques", "after": "as a domain adaptation technique, has drawn increasing attention in various fields", "start_char_pos": 503, "end_char_pos": 537}, {"type": "R", "before": "there still remain", "after": "the adaptation of approaches into BCI faces", "start_char_pos": 549, "end_char_pos": 567}, {"type": "R", "before": "; i) most", "after": ". 
(i) Most", "start_char_pos": 596, "end_char_pos": 605}, {"type": "R", "before": "ii) most of the", "after": "(ii) Most of the existing", "start_char_pos": 753, "end_char_pos": 768}, {"type": "R", "before": "negatively transferable", "after": "a negative transfer", "start_char_pos": 793, "end_char_pos": 816}, {"type": "R", "before": "those", "after": "these", "start_char_pos": 917, "end_char_pos": 922}, {"type": "R", "before": "high-level representation", "after": "high-", "start_char_pos": 971, "end_char_pos": 996}, {"type": "R", "before": "representation, separately", "after": "representations regardless of the domain that is considered as a subject in this paper", "start_char_pos": 1011, "end_char_pos": 1037}, {"type": "R", "before": "domain-invariant", "after": "subject-invariant", "start_char_pos": 1083, "end_char_pos": 1099}, {"type": "R", "before": "classification across", "after": "overall classification", "start_char_pos": 1167, "end_char_pos": 1188}, {"type": "D", "before": "a", "after": null, "start_char_pos": 1296, "end_char_pos": 1297}, {"type": "R", "before": "step only,", "after": "thus", "start_char_pos": 1311, "end_char_pos": 1321}, {"type": "A", "before": null, "after": "its", "start_char_pos": 1352, "end_char_pos": 1352}, {"type": "R", "before": "a big", "after": "two large", "start_char_pos": 1405, "end_char_pos": 1410}, {"type": "R", "before": "dataset by showing promising results, compared to competing methodsconsidered in our experiments", "after": "datasets via comparisons with other competing methods", "start_char_pos": 1429, "end_char_pos": 1525}], "sents_char_pos": [0, 163, 415, 539, 597, 851, 1039, 1190, 1368]} {"doc_id": "1910.10476", "revision_depth": "1", "before_revision": "In dominant livestock systems of Sahelian countries herds have to move across the territories. Their mobility is often source of conflict with farmers of the crossed areas , and engender the spread of diseases such as Rift Valley Fever. 
Knowledge of the routes followed by herds is therefore central to guide the implementation of preventive and control measures for transboundary animal diseases, land use planning and conflict management. However, the lack of quantitative data on livestock movements, together with the high temporal and spatial variability of herd movements, has so far hampered the production of fine resolution maps of passage of animals . This paper proposes a general framework for mapping potential paths for livestock movements and identify areas of high potential of animal passage for these movements. The method consists in coupling the information contained in livestock mobility network with landscape connectivity based on different mobility conductance layers. We illustrate our approach with a livestock mobility network in Senegal and Mauritania in dry and wet seasons in 2014.", "after_revision": "In the dominant livestock systems of Sahelian countries herds have to move across territories. Their mobility is often a source of conflict with farmers in the areas crossed, and helps spread diseases such as Rift Valley Fever. Knowledge of the routes followed by herds is therefore core to guiding the implementation of preventive and control measures for transboundary animal diseases, land use planning and conflict management. However, the lack of quantitative data on livestock movements, together with the high temporal and spatial variability of herd movements, has so far hampered the production of fine resolution maps of animal movements . This paper proposes a general framework for mapping potential paths for livestock movements and identifying areas of high animal passage potential for those movements. The method consists in combining the information contained in livestock mobility networks with landscape connectivity , based on different mobility conductance layers. 
We illustrate our approach with a livestock mobility network in Senegal and Mauritania in the 2014 dry and wet seasons .", "edit_actions": [{"type": "A", "before": null, "after": "the", "start_char_pos": 3, "end_char_pos": 3}, {"type": "D", "before": "the", "after": null, "start_char_pos": 79, "end_char_pos": 82}, {"type": "A", "before": null, "after": "a", "start_char_pos": 120, "end_char_pos": 120}, {"type": "R", "before": "of the crossed areas , and engender the spread of", "after": "in the areas crossed, and helps spread", "start_char_pos": 153, "end_char_pos": 202}, {"type": "R", "before": "central to guide", "after": "core to guiding", "start_char_pos": 294, "end_char_pos": 310}, {"type": "R", "before": "passage of animals", "after": "animal movements", "start_char_pos": 643, "end_char_pos": 661}, {"type": "R", "before": "identify", "after": "identifying", "start_char_pos": 760, "end_char_pos": 768}, {"type": "R", "before": "potential of animal passage for these", "after": "animal passage potential for those", "start_char_pos": 783, "end_char_pos": 820}, {"type": "R", "before": "coupling", "after": "combining", "start_char_pos": 855, "end_char_pos": 863}, {"type": "R", "before": "network", "after": "networks", "start_char_pos": 912, "end_char_pos": 919}, {"type": "A", "before": null, "after": ",", "start_char_pos": 948, "end_char_pos": 948}, {"type": "A", "before": null, "after": "the 2014", "start_char_pos": 1087, "end_char_pos": 1087}, {"type": "R", "before": "in 2014.", "after": ".", "start_char_pos": 1108, "end_char_pos": 1116}], "sents_char_pos": [0, 95, 238, 442, 663, 831, 996]} {"doc_id": "1911.00739", "revision_depth": "1", "before_revision": "The growth process of fungal hyphae involves transport of nutrients from the sub-apical region of the fungus to the growing tip of hyphae. At the tip domain, the URLanelle -Spitzenkorper uses the nutrients that are packaged as membrane bound vesicles to synthesize fresh cell wall. 
This in turn leads to elongation and branching of the hyphae and formation of a single colony mycelium. We propose a driven lattice gas model which incorporates the processes of vesicle supply dependent linear extension and branching of the tips of the hyphae, apart from process of motor driven transport of vesicles along the hyphae towards the growing tips . We first analyze the growth and transport process in the primary hypha which is subject to particle loss due to presence of branching sites. We show that the spatial profile of vesicles along the growing lattice is an exponential distribution, while the length grows logarithmically with time. We also obtain the probability distribution of length of the hypha , and find that it tends to a Gaussian distribution function at late times. In contrast, we find that probability distribution function of the time required for growth to a specific length is a broad log-normal distribution. We simulate the resultant 2-d morphology generated by the growing primary lattice , quantifying the motility behavior and morphological characteristics of the colony in terms of measures such as the Center of Mass, Radius of Gyration and Aspect Ratio . Analysis of the temporal behavior and morphological characteristics of the resultant 2-d morphology reveals a wide variability of these characteristics which depend on the input parameters which characterize the branching and elongation dynamics of the individual hyphae. We make quantitative comparison of the predictions of our model with experiments .", "after_revision": "We present a minimal driven lattice gas model which generates the morphological characteristics associated with single colony mycelium arising from the growth and branching process of fungal hyphae, which is fed by a single source of nutrients . 
We first analyze the growth and transport process in the primary hypha modeled as a growing 1-d lattice, which is subject to particle (vesicle) loss due to presence of dynamically created branching sites. We show that the spatial profile of vesicles along the growing lattice is an exponential distribution, while the length grows logarithmically with time. We also find that the probability distribution of length of the hypha tends to a Gaussian distribution function at late times. In contrast, the probability distribution function of the time required for growth to a specific length tends to a broad log-normal distribution. We simulate the resultant 2-d morphology generated by the growing primary hypha , quantifying the motility behavior and morphological characteristics of the colony . Analysis of the temporal behavior and morphological characteristics of the resultant 2-d morphology reveals a wide variability of these characteristics which depend on the input parameters which characterize the branching and elongation dynamics of the hyphae. By calibrating the input parameters for our model, we make some quantitative comparison of the predictions of our model with the observed experimental growth characteristics of fungal hyphae and the morphological characteristics of single colony fungal mycelium .", "edit_actions": [{"type": "R", "before": "The growth process of fungal hyphae involves transport of nutrients from the sub-apical region of the fungus to the growing tip of hyphae. At the tip domain, the URLanelle -Spitzenkorper uses the nutrients that are packaged as membrane bound vesicles to synthesize fresh cell wall. This in turn leads to elongation and branching of the hyphae and formation of a single colony mycelium. 
We propose a", "after": "We present a minimal", "start_char_pos": 0, "end_char_pos": 398}, {"type": "R", "before": "incorporates the processes of vesicle supply dependent linear extension and branching of the tips of the hyphae, apart from process of motor driven transport of vesicles along the hyphae towards the growing tips", "after": "generates the morphological characteristics associated with single colony mycelium arising from the growth and branching process of fungal hyphae, which is fed by a single source of nutrients", "start_char_pos": 430, "end_char_pos": 641}, {"type": "A", "before": null, "after": "modeled as a growing 1-d lattice,", "start_char_pos": 715, "end_char_pos": 715}, {"type": "A", "before": null, "after": "(vesicle)", "start_char_pos": 745, "end_char_pos": 745}, {"type": "A", "before": null, "after": "dynamically created", "start_char_pos": 770, "end_char_pos": 770}, {"type": "R", "before": "obtain", "after": "find that", "start_char_pos": 949, "end_char_pos": 955}, {"type": "D", "before": ", and find that it", "after": null, "start_char_pos": 1008, "end_char_pos": 1026}, {"type": "R", "before": "we find that", "after": "the", "start_char_pos": 1097, "end_char_pos": 1109}, {"type": "R", "before": "is", "after": "tends to", "start_char_pos": 1197, "end_char_pos": 1199}, {"type": "R", "before": "lattice", "after": "hypha", "start_char_pos": 1307, "end_char_pos": 1314}, {"type": "D", "before": "in terms of measures such as the Center of Mass, Radius of Gyration and Aspect Ratio", "after": null, "start_char_pos": 1399, "end_char_pos": 1483}, {"type": "R", "before": "individual hyphae. We make", "after": "hyphae. 
By calibrating the input parameters for our model, we make some", "start_char_pos": 1739, "end_char_pos": 1765}, {"type": "R", "before": "experiments", "after": "the observed experimental growth characteristics of fungal hyphae and the morphological characteristics of single colony fungal mycelium", "start_char_pos": 1827, "end_char_pos": 1838}], "sents_char_pos": [0, 138, 281, 385, 643, 787, 940, 1083, 1232, 1485, 1757]} {"doc_id": "1911.02095", "revision_depth": "1", "before_revision": "The rapid growth in biological sequence data is revolutionizing our understanding of genotypic diversity and challenging conventional approaches to informatics. Due to increasing availability of genomic data, traditional bioinformatic tools require substantial computational time and creation of ever larger indices each time a researcher seeks to gain insight from the data. To address these challenges, we pre-compute important relationships between biological entities and capture this information in a relational database. The database can be queried across millions of entities and returns results in a fraction of the time required by traditional methods. In this paper, we describeOMXWare , a comprehensive database relating genotype to phenotype for bacterial life. Continually updated, OMXWare today contains data derived from 200,000 curated, self-consistently assembled genomes. The database stores functional data for over 68 million genes, 52 million proteins, and 239 million domains with associated biological activity annotations from GeneOntology , KEGG, MetaCyc, and Reactome. OMXWare maps connections between each biological entity including the originating genome, gene, protein, and protein domain. Various microbial studies, from infectious disease to environmental health, can benefit from the rich data and relationships within OMXWare . 
We describe the data selection, the pipeline to create and update OMXWare, and developer tools (Python SDK and Rest APIs) which allow researchers to efficiently study microbial life at scale.", "after_revision": "The rapid growth in biological sequence data is revolutionizing our understanding of genotypic diversity and challenging conventional approaches to informatics. With the increasing availability of genomic data, traditional bioinformatic tools require substantial computational time and the creation of ever-larger indices each time a researcher seeks to gain insight from the data. To address these challenges, we pre-computed important relationships between biological entities spanning the Central Dogma of Molecular Biology and captured this information in a relational database. The database can be queried across hundreds of millions of entities and returns results in a fraction of the time required by traditional methods. In this paper, we describeIBM Functional Genomics Platform (formerly known as OMXWare) , a comprehensive database relating genotype to phenotype for bacterial life. Continually updated, IBM Functional Genomics Platform today contains data derived from 200,000 curated, self-consistently assembled genomes. The database stores functional data for over 68 million genes, 52 million proteins, and 239 million domains with associated biological activity annotations from Gene Ontology , KEGG, MetaCyc, and Reactome. IBM Functional Genomics Platform maps all of the many-to-many connections between each biological entity including the originating genome, gene, protein, and protein domain. Various microbial studies, from infectious disease to environmental health, can benefit from the rich data and connections . 
We describe the data selection, the pipeline to create and update the IBM Functional Genomics Platform, and the developer tools (Python SDK and REST APIs) which allow researchers to efficiently study microbial life at scale.", "edit_actions": [{"type": "R", "before": "Due to", "after": "With the", "start_char_pos": 161, "end_char_pos": 167}, {"type": "R", "before": "creation of ever larger", "after": "the creation of ever-larger", "start_char_pos": 284, "end_char_pos": 307}, {"type": "R", "before": "pre-compute", "after": "pre-computed", "start_char_pos": 408, "end_char_pos": 419}, {"type": "R", "before": "and capture", "after": "spanning the Central Dogma of Molecular Biology and captured", "start_char_pos": 472, "end_char_pos": 483}, {"type": "A", "before": null, "after": "hundreds of", "start_char_pos": 562, "end_char_pos": 562}, {"type": "R", "before": "describeOMXWare", "after": "describe", "start_char_pos": 681, "end_char_pos": 696}, {"type": "A", "before": null, "after": "IBM Functional Genomics Platform", "start_char_pos": 696, "end_char_pos": 696}, {"type": "A", "before": null, "after": "(formerly known as OMXWare)", "start_char_pos": 697, "end_char_pos": 697}, {"type": "R", "before": "OMXWare", "after": "IBM Functional Genomics Platform", "start_char_pos": 797, "end_char_pos": 804}, {"type": "R", "before": "GeneOntology", "after": "Gene Ontology", "start_char_pos": 1053, "end_char_pos": 1065}, {"type": "R", "before": "OMXWare maps", "after": "IBM Functional Genomics Platform maps all of the many-to-many", "start_char_pos": 1097, "end_char_pos": 1109}, {"type": "R", "before": "relationships within OMXWare", "after": "connections", "start_char_pos": 1333, "end_char_pos": 1361}, {"type": "R", "before": "OMXWare, and", "after": "the IBM Functional Genomics Platform, and the", "start_char_pos": 1430, "end_char_pos": 1442}, {"type": "R", "before": "Rest", "after": "REST", "start_char_pos": 1475, "end_char_pos": 1479}], "sents_char_pos": [0, 160, 375, 526, 662, 
775, 891, 1096, 1221, 1363]} {"doc_id": "1911.04848", "revision_depth": "1", "before_revision": "This paper presents an expert-guided Mixed-Initiative (MI) variable-autonomy controller for remotely operated mobile robots. The controller enables switching between different Level(s) of Autonomy (LOA) during task execution initiated by either the human operator or/andthe robot. The controller is evaluated in two Search and Rescue (SAR) inspired experiments, one with a simulated robot and test arena and one with a real robot in a realistic environment. Our hypothesis is that timely switching between different LOAs will enable the system to overcome various performance degrading factors and thus enable superior performance compared to systems in which LOAs cannot switch on-the-fly. Statistically validated analyses from the two experiments provide evidence that: a) Human-Initiative (HI) systems outperform purely teleoperated or autonomous systems in navigation tasks; b) MI systems provide improved performance in navigation tasks, improved operator performance in cognitive demanding secondary tasks, and improved operator workload . Results also reinforce previous Human-Robot interaction (HRI) evidence regarding the importance of the operator's personality traits and their trust in the autonomous system. Lastly, the paper provides empirical evidence that identify two major challenges for MI control: a) the design of context-aware MI controllers ; and b) the conflict for control between the robot and the operator .", "after_revision": "This paper presents an Expert-guided Mixed-Initiative Control Switcher (EMICS) for remotely operated mobile robots. The EMICS enables switching between different levels of autonomy during task execution initiated by either the human operator and/or the EMICS. The EMICS is evaluated in two disaster response inspired experiments, one with a simulated robot and test arena , and one with a real robot in a realistic environment. 
Analyses from the two experiments provide evidence that: a) Human-Initiative (HI) systems outperform systems with single modes of operation, such as pure teleoperation, in navigation tasks; b) in the context of the simulated robot experiment, Mixed-Initiative (MI) systems provide improved performance in navigation tasks, improved operator performance in cognitive demanding secondary tasks, and improved operator workload compared to HI . Results also reinforce previous human-robot interaction evidence regarding the importance of the operator's personality traits and their trust in the autonomous system. Lastly, our experiment on a physical robot provides empirical evidence that identify two major challenges for MI control: a) the design of context-aware MI control systems ; and b) the conflict for control between the robot 's MI control system and the operator . Insights regarding these challenges are discussed and ways to tackle them are proposed .", "edit_actions": [{"type": "R", "before": "expert-guided", "after": "Expert-guided", "start_char_pos": 23, "end_char_pos": 36}, {"type": "R", "before": "(MI) variable-autonomy controller", "after": "Control Switcher (EMICS)", "start_char_pos": 54, "end_char_pos": 87}, {"type": "R", "before": "controller", "after": "EMICS", "start_char_pos": 129, "end_char_pos": 139}, {"type": "R", "before": "Level(s) of Autonomy (LOA)", "after": "levels of autonomy", "start_char_pos": 176, "end_char_pos": 202}, {"type": "R", "before": "or/andthe robot. The controller", "after": "and/or the EMICS. 
The EMICS", "start_char_pos": 264, "end_char_pos": 295}, {"type": "R", "before": "Search and Rescue (SAR)", "after": "disaster response", "start_char_pos": 316, "end_char_pos": 339}, {"type": "A", "before": null, "after": ",", "start_char_pos": 404, "end_char_pos": 404}, {"type": "R", "before": "Our hypothesis is that timely switching between different LOAs will enable the system to overcome various performance degrading factors and thus enable superior performance compared to systems in which LOAs cannot switch on-the-fly. Statistically validated analyses", "after": "Analyses", "start_char_pos": 459, "end_char_pos": 724}, {"type": "R", "before": "purely teleoperated or autonomous systems", "after": "systems with single modes of operation, such as pure teleoperation,", "start_char_pos": 817, "end_char_pos": 858}, {"type": "R", "before": "MI", "after": "in the context of the simulated robot experiment, Mixed-Initiative (MI)", "start_char_pos": 883, "end_char_pos": 885}, {"type": "A", "before": null, "after": "compared to HI", "start_char_pos": 1045, "end_char_pos": 1045}, {"type": "R", "before": "Human-Robot interaction (HRI)", "after": "human-robot interaction", "start_char_pos": 1080, "end_char_pos": 1109}, {"type": "R", "before": "the paper", "after": "our experiment on a physical robot", "start_char_pos": 1231, "end_char_pos": 1240}, {"type": "R", "before": "controllers", "after": "control systems", "start_char_pos": 1354, "end_char_pos": 1365}, {"type": "A", "before": null, "after": "'s MI control system", "start_char_pos": 1418, "end_char_pos": 1418}, {"type": "A", "before": null, "after": ". 
Insights regarding these challenges are discussed and ways to tackle them are proposed", "start_char_pos": 1436, "end_char_pos": 1436}], "sents_char_pos": [0, 124, 280, 458, 691, 879, 1222, 1367]} {"doc_id": "1911.05146", "revision_depth": "1", "before_revision": "The enormous amount of data and computation required to train DNNshave led to the rise of various parallelization strategies . Broadly, there are two strategies: 1) Data-Parallelism -- replicating the DNN on multiple processes and training on different training samples, and 2) Model-Parallelism -- dividing elements of the DNN itself into partitions across different processes . While data-parallelism has been extensively studied and developed, model-parallelism has received less attention as it is non-trivial to split the model across processes . In this paper, we propose HyPar-Flow : a framework for scalable and user-transparent parallel training of very large DNNs (up to 5, 000 layers). We exploit TensorFlow's Eager Execution features and KerasAPIs for model definition and distribution . HyPar-Flow exposes a simple API to offer data, model, and hybrid (model + data) parallel training for models defined using the Keras API. Under the hood, we introduce MPI communication primitives like send and recv on layer boundaries for data exchange between model-partitions and allreduce for gradient exchange across model-replicas. Our proposed designs in HyPar-Flow offer up to 3.1x speedup over sequential training for ResNet-110 and up to 1.6x speedup over Horovod-based data-parallel training for ResNet-1001; a model that has 1, 001 layers and 30 million parameters. We provide an in-depth performance characterization of the HyPar-Flow framework on multiple HPC systems with diverse CPU architectures including Intel Xeon(s)and AMD EPYC. 
HyPar-Flow provides 110x speed up on 128 nodes of the Stampede2 cluster at TACC for hybrid-parallel training of ResNet-1001 .", "after_revision": "To reduce training time of large-scale DNNs, scientists have started to explore parallelization strategies like data-parallelism, model-parallelism, and hybrid-parallelism . While data-parallelism has been extensively studied and developed, several problems exist in realizing model-parallelism and hybrid-parallelism efficiently. Four major problems we focus on are: 1) defining a notion of a distributed model across processes , 2) implementing forward/back-propagation across process boundaries that requires explicit communication, 3) obtaining parallel speedup on an inherently sequential task, and 4) achieving scalability without losing out on a model's accuracy. To address these problems, we create HyPar-Flow --- a model-size/-type agnostic, scalable, practical, and user-transparent system for hybrid-parallel training by exploiting MPI, Keras, and TensorFlow . HyPar-Flow provides a single API that can be used to perform data, model, and hybrid parallel training of any Keras model at scale. We create an internal distributed representation of the user-provided Keras model, utilize TF's Eager execution features for distributed forward/back-propagation across processes, exploit pipelining to improve performance and leverage efficient MPI primitives for scalable communication. Between model partitions, we use send and recv to exchange layer-data/partial-errors while allreduce is used to accumulate/average gradients across model replicas. Beyond the design and implementation of HyPar-Flow, we also provide comprehensive correctness and performance results on three state-of-the-art HPC systems including TACC Frontera (#5 on URL). 
For ResNet-1001, an ultra-deep model, HyPar-Flow provides: 1) Up to 1.6x speedup over Horovod-based data-parallel training, 2) 110x speedup over single-node on 128 Stampede2 nodes, and 3) 481x speedup over single-node on 512 Frontera nodes .", "edit_actions": [{"type": "R", "before": "The enormous amount of data and computation required to train DNNshave led to the rise of various parallelization strategies . Broadly, there are two strategies: 1) Data-Parallelism -- replicating the DNN on multiple processes and training on different training samples, and 2) Model-Parallelism -- dividing elements of the DNN itself into partitions across different processes", "after": "To reduce training time of large-scale DNNs, scientists have started to explore parallelization strategies like data-parallelism, model-parallelism, and hybrid-parallelism", "start_char_pos": 0, "end_char_pos": 377}, {"type": "A", "before": null, "after": "several problems exist in realizing", "start_char_pos": 447, "end_char_pos": 447}, {"type": "R", "before": "has received less attention as it is non-trivial to split the", "after": "and hybrid-parallelism efficiently. Four major problems we focus on are: 1) defining a notion of a distributed", "start_char_pos": 466, "end_char_pos": 527}, {"type": "R", "before": ". In this paper, we propose", "after": ", 2) implementing forward/back-propagation across process boundaries that requires explicit communication, 3) obtaining parallel speedup on an inherently sequential task, and 4) achieving scalability without losing out on a model's accuracy. To address these problems, we create", "start_char_pos": 551, "end_char_pos": 578}, {"type": "R", "before": ": a framework for scalable", "after": "--- a model-size/-type agnostic, scalable, practical,", "start_char_pos": 590, "end_char_pos": 616}, {"type": "R", "before": "parallel training of very large DNNs (up to 5, 000 layers). 
We exploit TensorFlow's Eager Execution features and KerasAPIs for model definition and distribution", "after": "system for hybrid-parallel training by exploiting MPI, Keras, and TensorFlow", "start_char_pos": 638, "end_char_pos": 798}, {"type": "R", "before": "exposes a simple API to offer", "after": "provides a single API that can be used to perform", "start_char_pos": 812, "end_char_pos": 841}, {"type": "R", "before": "(model + data) parallel training for models defined using the Keras API. Under the hood, we introduce MPI communication primitives like", "after": "parallel training of any Keras model at scale. We create an internal distributed representation of the user-provided Keras model, utilize TF's Eager execution features for distributed forward/back-propagation across processes, exploit pipelining to improve performance and leverage efficient MPI primitives for scalable communication. Between model partitions, we use", "start_char_pos": 866, "end_char_pos": 1001}, {"type": "R", "before": "on layer boundaries for data exchange between model-partitions and allreduce for gradient exchange across model-replicas. Our proposed designs in", "after": "to exchange layer-data/partial-errors while allreduce is used to accumulate/average gradients across model replicas. Beyond the design and implementation of HyPar-Flow, we also provide comprehensive correctness and performance results on three state-of-the-art HPC systems including TACC Frontera (#5 on URL). For ResNet-1001, an ultra-deep model,", "start_char_pos": 1016, "end_char_pos": 1161}, {"type": "R", "before": "offer up to 3.1x speedup over sequential training for ResNet-110 and up to", "after": "provides: 1) Up to", "start_char_pos": 1173, "end_char_pos": 1247}, {"type": "R", "before": "training for ResNet-1001; a model that has 1, 001 layers and 30 million parameters. 
We provide an in-depth performance characterization of the HyPar-Flow framework on multiple HPC systems with diverse CPU architectures including Intel Xeon(s)and AMD EPYC. HyPar-Flow provides", "after": "training, 2)", "start_char_pos": 1294, "end_char_pos": 1569}, {"type": "R", "before": "speed up", "after": "speedup over single-node", "start_char_pos": 1575, "end_char_pos": 1583}, {"type": "D", "before": "nodes of the", "after": null, "start_char_pos": 1591, "end_char_pos": 1603}, {"type": "R", "before": "cluster at TACC for hybrid-parallel training of ResNet-1001", "after": "nodes, and 3) 481x speedup over single-node on 512 Frontera nodes", "start_char_pos": 1614, "end_char_pos": 1673}], "sents_char_pos": [0, 379, 552, 697, 800, 938, 1137, 1319, 1377, 1549]} {"doc_id": "1911.06716", "revision_depth": "2", "before_revision": "Assortment optimization is an important problem that arises in many practical applications such as retailing and online advertising where the goal is to find a subset of products from a universe of substitutable products that maximize a seller's expected revenue. The demand and the revenue depend on the substitution behavior of the customers that is captured by a choice model. One of the key challenges is to find the right model for the customer substitution behavior. Many parametric random utility based models have been considered in the literature to capture substitution . However, in all these models, the probability of purchase increases as we add more options to the assortment. This is not true in general and in many settings , the probability of purchase may decrease if we add more products to the assortment, referred to as the choice overload. In this paper we attempt to address these serious limitations and propose a generalization of the Markov chain based choice model considered in Blanchet et al. 
In particular, we handle dynamic preferences and the choice overload phenomenon using a Markovian comparison model that is a generalization of the Markovian substitution framework of Blanchet et al. The Markovian comparison framework allows us to implicitly model the search cost in the choice process and thereby, modeling both dynamic preferences as well as the choice overload phenomenon. We consider the assortment optimization problem for the special case of our generalized Markov chain model where the underlying Markov chain is rank-1 (this is a generalization of the Multinomial Logit model). We show that the assortment optimization problem under this model is NP-hard and present a fully polynomial-time approximation scheme (FPTAS) for this problem .", "after_revision": "Assortment optimization is an important problem that arises in many industries such as retailing and online advertising where the goal is to find a subset of products from a universe of substitutable products which maximize seller's expected revenue. One of the key challenges in this problem is to model the customer substitution behavior. Many parametric random utility maximization (RUM) based choice models have been considered in the literature . However, in all these models, probability of purchase increases as we include more products to an assortment. This is not true in general and in many settings more choices hurt sales. This is commonly referred to as the choice overload. In this paper we attempt to address this limitation in RUM through a generalization of the Markov chain based choice model considered in Blanchet et al. (2016). As a special case, we show that our model reduces to a generalization of MNL with no-purchase attractions dependent on the assortment S and strictly increasing with the size of assortment S. 
While we show that the assortment optimization under this model is NP-hard , we present fully polynomial-time approximation scheme (FPTAS) under reasonable assumptions .", "edit_actions": [{"type": "R", "before": "practical applications", "after": "industries", "start_char_pos": 68, "end_char_pos": 90}, {"type": "R", "before": "that maximize a", "after": "which maximize", "start_char_pos": 221, "end_char_pos": 236}, {"type": "D", "before": "The demand and the revenue depend on the substitution behavior of the customers that is captured by a choice model.", "after": null, "start_char_pos": 264, "end_char_pos": 379}, {"type": "R", "before": "is to find the right model for", "after": "in this problem is to model", "start_char_pos": 406, "end_char_pos": 436}, {"type": "R", "before": "based", "after": "maximization (RUM) based choice", "start_char_pos": 504, "end_char_pos": 509}, {"type": "D", "before": "to capture substitution", "after": null, "start_char_pos": 556, "end_char_pos": 579}, {"type": "D", "before": "the", "after": null, "start_char_pos": 612, "end_char_pos": 615}, {"type": "R", "before": "add more options to the", "after": "include more products to an", "start_char_pos": 656, "end_char_pos": 679}, {"type": "R", "before": ", the probability of purchase may decrease if we add more products to the assortment,", "after": "more choices hurt sales. This is commonly", "start_char_pos": 741, "end_char_pos": 826}, {"type": "R", "before": "these serious limitations and propose", "after": "this limitation in RUM through", "start_char_pos": 899, "end_char_pos": 936}, {"type": "R", "before": "In particular, we handle dynamic preferences and the choice overload phenomenon using a Markovian comparison model that is", "after": "(2016). As a special case, we show that our model reduces to", "start_char_pos": 1023, "end_char_pos": 1145}, {"type": "R", "before": "the Markovian substitution framework of Blanchet et al. 
The Markovian comparison framework allows us to implicitly model the search cost in the choice process and thereby, modeling both dynamic preferences as well as the choice overload phenomenon. We consider the assortment optimization problem for the special case of our generalized Markov chain model where the underlying Markov chain is rank-1 (this is a generalization of the Multinomial Logit model). We", "after": "MNL with no-purchase attractions dependent on the assortment S and strictly increasing with the size of assortment S. While we", "start_char_pos": 1166, "end_char_pos": 1627}, {"type": "D", "before": "problem", "after": null, "start_char_pos": 1666, "end_char_pos": 1673}, {"type": "R", "before": "and present a", "after": ", we present", "start_char_pos": 1702, "end_char_pos": 1715}, {"type": "R", "before": "for this problem", "after": "under reasonable assumptions", "start_char_pos": 1767, "end_char_pos": 1783}], "sents_char_pos": [0, 263, 379, 472, 581, 691, 862, 1022, 1221, 1414, 1624]} {"doc_id": "1911.07085", "revision_depth": "1", "before_revision": "This paper studies causal inference in randomized experiments under network interference. Most existing models of interference posit that treatments assigned to alters only affect the ego 's response through a low-dimensional exposure mapping, which only depends on units within some known network radius around the ego. We propose a substantially weaker \"approximate neighborhood interference\" (ANI) assumption, which allows treatments assigned to alters far from the ego to have a small , but potentially nonzero, impact on the ego's response. Unlike the exposure mapping model, we can show that ANI is satisfied in well-known models of social interactions. Despite its generality, inference in a single-network setting is still possible under ANI, as we prove that standard inverse-probability weighting estimators can consistently estimate treatment and spillover effects and are asymptotically normal . 
For practical inference, we propose a new conservative variance estimatorbased on a network bootstrap and suggest a data-dependent bandwidth using the network diameter. Finally, we illustrate our results in a simulation study and empirical application .", "after_revision": "This paper studies causal inference in randomized experiments under network interference. Most of the literature assumes a model of interference under which treatments assigned to alters beyond a certain network distance from the ego have no effect on the ego 's response . However, many models of social interactions do not satisfy this assumption. This paper proposes a substantially weaker model of \"approximate neighborhood interference\" (ANI) , under which treatments assigned to alters further from the ego have a smaller , but potentially nonzero, impact on the ego's response. We show that ANI is satisfied in well-known models of social interactions. We also prove that, under ANI, standard inverse-probability weighting estimators can consistently estimate useful exposure effects and are asymptotically normal under asymptotics taking the network size large. For inference, we consider a network HAC variance estimator. Under a finite population model, we show the estimator is biased but that the bias can be interpreted as the variance of unit-level exposure effects. This generalizes Neyman's well-known result on conservative variance estimation to settings with interference .", "edit_actions": [{"type": "R", "before": "existing models of interference posit that", "after": "of the literature assumes a model of interference under which", "start_char_pos": 95, "end_char_pos": 137}, {"type": "R", "before": "only affect the ego", "after": "beyond a certain network distance from the ego have no effect on the ego", "start_char_pos": 168, "end_char_pos": 187}, {"type": "R", "before": "through a low-dimensional exposure mapping, which only depends on units within some known network radius around the ego. 
We propose", "after": ". However, many models of social interactions do not satisfy this assumption. This paper proposes", "start_char_pos": 200, "end_char_pos": 331}, {"type": "A", "before": null, "after": "model of", "start_char_pos": 355, "end_char_pos": 355}, {"type": "R", "before": "assumption, which allows", "after": ", under which", "start_char_pos": 402, "end_char_pos": 426}, {"type": "R", "before": "far", "after": "further", "start_char_pos": 457, "end_char_pos": 460}, {"type": "R", "before": "to have a small", "after": "have a smaller", "start_char_pos": 474, "end_char_pos": 489}, {"type": "R", "before": "Unlike the exposure mapping model, we can", "after": "We", "start_char_pos": 547, "end_char_pos": 588}, {"type": "R", "before": "Despite its generality, inference in a single-network setting is still possible", "after": "We also prove that,", "start_char_pos": 661, "end_char_pos": 740}, {"type": "D", "before": "as we prove that", "after": null, "start_char_pos": 752, "end_char_pos": 768}, {"type": "R", "before": "treatment and spillover", "after": "useful exposure", "start_char_pos": 845, "end_char_pos": 868}, {"type": "R", "before": ". For practical", "after": "under asymptotics taking the network size large. For", "start_char_pos": 907, "end_char_pos": 922}, {"type": "R", "before": "propose a new conservative variance estimatorbased on a network bootstrap and suggest a data-dependent bandwidth using the network diameter. Finally, we illustrate our results in a simulation study and empirical application", "after": "consider a network HAC variance estimator. Under a finite population model, we show the estimator is biased but that the bias can be interpreted as the variance of unit-level exposure effects. 
This generalizes Neyman's well-known result on conservative variance estimation to settings with interference", "start_char_pos": 937, "end_char_pos": 1160}], "sents_char_pos": [0, 89, 320, 546, 660, 908, 1077]} {"doc_id": "1911.10176", "revision_depth": "1", "before_revision": "User embodiment is important for many Virtual Reality (VR) applications, for example, in the context of social interaction, therapy, training, or entertainment. Yet , there is no commonly agreed-on or validated way to empirically measure the perception of embodiment necessary to evaluate this important quality of user experience (UX). We present a virtual embodiment questionnaire (VEQ) to assess components of virtual embodiment in a valid, reliable, and consistent fashion . We reviewed previous literature to identify applicable concepts and items and performed a confirmatory factor analysis on the data of three experiments (N = 196). Each experiment modified a distinct simulation property, namely the level of immersion, the level of personalization, and the level of behavioral realism. The analysis confirmed three factors: (1) the ownership of a virtual body, (2) the agency over a virtual body, and (3) the change in the perceived body schema. A fourth study (N = 22) further confirmed the reliability and validity of the scale and investigated the impacts of latency jitter of avatar movements presented in the simulation compared to linear latencies and a baseline. We present the final scale and further insights from the studies regarding related constructs.", "after_revision": "User embodiment is important for many virtual reality (VR) applications, for example, in the context of social interaction, therapy, training, or entertainment. However , there is no validated instrument to empirically measure the perception of embodiment , necessary to reliably evaluate this important quality of user experience (UX). 
To assess components of virtual embodiment in a valid, reliable, and consistent fashion , we develped a Virtual Embodiment Questionnaire (VEQ) . We reviewed previous literature to identify applicable constructs and items, and performed a confirmatory factor analysis (CFA) on the data from three experiments (N = 196). Each experiment modified a distinct simulation property, namely , the level of immersion, the level of personalization, and the level of behavioral realism. The analysis confirmed three factors: (1) ownership of a virtual body, (2) agency over a virtual body, and (3) change in the perceived body schema. A fourth study (N = 22) further confirmed the reliability and validity of the scale and investigated the impacts of latency jitter of avatar movements presented in the simulation compared to linear latencies and a baseline. We present the final scale and further insights from the studies regarding related constructs.", "edit_actions": [{"type": "R", "before": "Virtual Reality", "after": "virtual reality", "start_char_pos": 38, "end_char_pos": 53}, {"type": "R", "before": "Yet", "after": "However", "start_char_pos": 161, "end_char_pos": 164}, {"type": "R", "before": "commonly agreed-on or validated way", "after": "validated instrument", "start_char_pos": 179, "end_char_pos": 214}, {"type": "R", "before": "necessary to", "after": ", necessary to reliably", "start_char_pos": 267, "end_char_pos": 279}, {"type": "R", "before": "We present a virtual embodiment questionnaire (VEQ) to", "after": "To", "start_char_pos": 337, "end_char_pos": 391}, {"type": "A", "before": null, "after": ", we develped a Virtual Embodiment Questionnaire (VEQ)", "start_char_pos": 477, "end_char_pos": 477}, {"type": "R", "before": "concepts and items", "after": "constructs and items,", "start_char_pos": 535, "end_char_pos": 553}, {"type": "A", "before": null, "after": "(CFA)", "start_char_pos": 599, "end_char_pos": 599}, {"type": "R", "before": "of", "after": "from", 
"start_char_pos": 612, "end_char_pos": 614}, {"type": "A", "before": null, "after": ",", "start_char_pos": 708, "end_char_pos": 708}, {"type": "D", "before": "the", "after": null, "start_char_pos": 842, "end_char_pos": 845}, {"type": "D", "before": "the", "after": null, "start_char_pos": 879, "end_char_pos": 882}, {"type": "D", "before": "the", "after": null, "start_char_pos": 919, "end_char_pos": 922}], "sents_char_pos": [0, 160, 336, 479, 643, 799, 959, 1183]} {"doc_id": "1912.00053", "revision_depth": "1", "before_revision": "According to features of drug addiction, this paper constructs a SEIR-based SUC model to describe and predict the spread of drug addiction. Predictions are that the number of drug addictions will continue to fluctuate with a reduced amplitude and eventually stabilize. To seek the fountainhead of heroin, we identified the most likely origins of drugs in Philadelphia, PA, Cuyahoga and Hamilton, OH, Jefferson, KY, Kanawha, WV, and Bedford, VA. Based on the facts, advised concentration includes the spread of Oxycodone, Hydrocodone, Heroin and Buprenorphine. In other words, the drug transmission in the two states of Ohio and Pennsylvania require awareness. According to the propagation curve predicted by our model, the transmission of KY state is still in its early stage, while that of VA, WV is in the middle point, and OH, PA in its latter ones. Hereby the number of drug addictions in KY, OH, and VA is projected to increase in three years. For methodology, with Principal component analysis technique, 22 variables in socioeconomic data related to the continuous use of opioid drugs was filtered, where the 'Relationship' Part deserves highlight. Based on them, by using K-means algorithm, 464 counties were categorized into three basket . To combat opioid crisis, detailed action will be discussed in the sensitivity analysis section. After modeling and analytics, innovation is require to control addict and advocate anti-drug news campaigns. 
This part also verified the effectiveness of model when d_1<0.2; r_1,r_2,r_3<0.3; 15<\\beta_1,\\beta_2,\\beta_3<25. In other words, if such boundary exceeded, number of drug addictions may rocket and peak in a short period.", "after_revision": "According to the features of drug addiction, this paper constructs an SEIR-based SUC model to describe and predict the spread of drug addiction. Predictions are that the number of drug addictions will continue to fluctuate with reduced amplitude and eventually stabilize. To seek the fountainhead of heroin, we identified the most likely origins of drugs in Philadelphia, PA, Cuyahoga and Hamilton, OH, Jefferson, KY, Kanawha, WV, and Bedford, VA. Based on the facts, advised concentration includes the spread of Oxycodone, Hydrocodone, Heroin , and Buprenorphine. In other words, drug transmission in the two states of Ohio and Pennsylvania require awareness. According to the propagation curve predicted by our model, the transfer of KY state is still in its early stage, while that of VA, WV is in the middle point, and OH, PA in its latter ones. As a result of this, the number of drug addictions in KY, OH, and VA is projected to increase in three years. For methodology, with the Principal component analysis technique, 22 variables in socio-economic data related to the continuous use of Opioid drugs was filtered, where the 'Relationship' Part deserves a highlight. Based on them, by using the K-means algorithm, 464 counties were categorized into three baskets . To combat the opioid crisis, a specific action will discuss in the sensitivity analysis section. After modeling and analytics, innovation is required to control addicts and advocate anti-drug news campaigns. This part also verified the effectiveness of model when d_1<0.2; r_1,r_2,r_3<0.3; 15<\\beta_1,\\beta_2,\\beta_3<25. 
In other words, if such boundary exceeded, the number of drug addictions may rocket and peak in a short period.", "edit_actions": [{"type": "A", "before": null, "after": "the", "start_char_pos": 13, "end_char_pos": 13}, {"type": "R", "before": "a", "after": "an", "start_char_pos": 64, "end_char_pos": 65}, {"type": "D", "before": "a", "after": null, "start_char_pos": 224, "end_char_pos": 225}, {"type": "A", "before": null, "after": ",", "start_char_pos": 542, "end_char_pos": 542}, {"type": "D", "before": "the", "after": null, "start_char_pos": 578, "end_char_pos": 581}, {"type": "R", "before": "transmission", "after": "transfer", "start_char_pos": 725, "end_char_pos": 737}, {"type": "R", "before": "Hereby", "after": "As a result of this,", "start_char_pos": 855, "end_char_pos": 861}, {"type": "A", "before": null, "after": "the", "start_char_pos": 973, "end_char_pos": 973}, {"type": "R", "before": "socioeconomic", "after": "socio-economic", "start_char_pos": 1030, "end_char_pos": 1043}, {"type": "R", "before": "opioid", "after": "Opioid", "start_char_pos": 1082, "end_char_pos": 1088}, {"type": "A", "before": null, "after": "a", "start_char_pos": 1148, "end_char_pos": 1148}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1184, "end_char_pos": 1184}, {"type": "R", "before": "basket", "after": "baskets", "start_char_pos": 1245, "end_char_pos": 1251}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1264, "end_char_pos": 1264}, {"type": "R", "before": "detailed action will be discussed", "after": "a specific action will discuss", "start_char_pos": 1280, "end_char_pos": 1313}, {"type": "R", "before": "require to control addict", "after": "required to control addicts", "start_char_pos": 1395, "end_char_pos": 1420}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1616, "end_char_pos": 1616}], "sents_char_pos": [0, 140, 269, 561, 661, 854, 950, 1159, 1253, 1350, 1459, 1524, 1541, 1572]} {"doc_id": "1912.00719", 
"revision_depth": "2", "before_revision": "When exploring large time-varying data sets , visual summaries are a useful tool to identify time intervals of interest for further consideration . A typical approach is to represent the data elements at each time step in a compact one-dimensional form or via a one-dimensional ordering . Such 1D representations can then be placed in temporal order along a time line. There are two main criteria to assess the quality of the resulting visual summary: spatial quality -- how well does the 1D representation capture the structure of the data at each time step, and stability -- how coherent are the 1D representations over consecutive time steps or temporal ranges? We focus on techniques that create such visual summaries for entities moving in 2D. Previous work has considered only the creation of 1D orderings, using spatial subdivisions and clustering techniques. In contrast, we propose to use actual dimensionality-reduction techniques to compute stable and spatially informative 1D representations. These more general 1D representations provide the user with additional visual cues describing the spatial distribution of the data, and naturally imply also a 1D ordering. To make dimensionality-reduction techniques suitable for visual summaries, we introduce stable variants of Principle Component Analysis, Sammon mapping, and t-SNE. Our Stable Principal Component method is explicitly parametrized for stability, allowing a trade-off between the spatial quality and stability. We conduct computational experiments that quantitatively compare the 1D orderings produced by our stable dimensionality-reduction methods to various state-of-the-art approaches using a set of well-established quality metrics that capture spatial quality and stability. 
We conclude that our stable algorithms outperform existing methods on stability, without sacrificing spatial quality or efficiency .", "after_revision": "The availability of devices that track moving objects has led to an explosive growth in trajectory data. When exploring the resulting large trajectory collections , visual summaries are a useful tool to identify time intervals of interest . A typical approach is to represent the spatial positions of the tracked objects at each time step via a one-dimensional ordering ; visualizations of such orderings can then be placed in temporal order along a time line. There are two main criteria to assess the quality of the resulting visual summary: spatial quality -- how well does the ordering capture the structure of the data at each time step, and stability -- how coherent are the orderings over consecutive time steps or temporal ranges? In this paper we introduce a new Stable Principal Component (SPC) method to compute such orderings, which is explicitly parameterized for stability, allowing a trade-off between the spatial quality and stability. We conduct extensive computational experiments that quantitatively compare the orderings produced by ours and other stable dimensionality-reduction methods to various state-of-the-art approaches using a set of well-established quality metrics that capture spatial quality and stability. We conclude that stable dimensionality reduction outperforms existing methods on stability, without sacrificing spatial quality or efficiency ; in particular, our new SPC method does so at a fraction of the computational costs .", "edit_actions": [{"type": "R", "before": "When exploring large time-varying data sets", "after": "The availability of devices that track moving objects has led to an explosive growth in trajectory data. 
When exploring the resulting large trajectory collections", "start_char_pos": 0, "end_char_pos": 43}, {"type": "D", "before": "for further consideration", "after": null, "start_char_pos": 120, "end_char_pos": 145}, {"type": "R", "before": "data elements", "after": "spatial positions of the tracked objects", "start_char_pos": 187, "end_char_pos": 200}, {"type": "D", "before": "in a compact one-dimensional form or", "after": null, "start_char_pos": 219, "end_char_pos": 255}, {"type": "R", "before": ". Such 1D representations", "after": "; visualizations of such orderings", "start_char_pos": 287, "end_char_pos": 312}, {"type": "R", "before": "1D representation", "after": "ordering", "start_char_pos": 489, "end_char_pos": 506}, {"type": "R", "before": "1D representations", "after": "orderings", "start_char_pos": 598, "end_char_pos": 616}, {"type": "R", "before": "We focus on techniques that create such visual summaries for entities moving in 2D. Previous work has considered only the creation of 1D orderings, using spatial subdivisions and clustering techniques. In contrast, we propose to use actual dimensionality-reduction techniques to compute stable and spatially informative 1D representations. These more general 1D representations provide the user with additional visual cues describing the spatial distribution of the data, and naturally imply also a 1D ordering. To make dimensionality-reduction techniques suitable for visual summaries, we introduce stable variants of Principle Component Analysis, Sammon mapping, and t-SNE. 
Our", "after": "In this paper we introduce a new", "start_char_pos": 665, "end_char_pos": 1344}, {"type": "R", "before": "method is explicitly parametrized", "after": "(SPC) method to compute such orderings, which is explicitly parameterized", "start_char_pos": 1372, "end_char_pos": 1405}, {"type": "A", "before": null, "after": "extensive", "start_char_pos": 1496, "end_char_pos": 1496}, {"type": "D", "before": "1D", "after": null, "start_char_pos": 1555, "end_char_pos": 1557}, {"type": "R", "before": "our", "after": "ours and other", "start_char_pos": 1580, "end_char_pos": 1583}, {"type": "R", "before": "our stable algorithms outperform", "after": "stable dimensionality reduction outperforms", "start_char_pos": 1772, "end_char_pos": 1804}, {"type": "A", "before": null, "after": "; in particular, our new SPC method does so at a fraction of the computational costs", "start_char_pos": 1886, "end_char_pos": 1886}], "sents_char_pos": [0, 147, 288, 368, 664, 748, 866, 1004, 1176, 1484, 1754]} {"doc_id": "1912.00921", "revision_depth": "1", "before_revision": "Through the recent results that we present , we wish to highlight the various time-scales on which the demographic dynamics and the natural selection can be retrieved from individual-based models. Although these results are by no means exhaustive, both on the mathematical and the biological level, they are very complementary . Indeed, they provide a viewpoint for the most classical time-scales of interest. They namely encompass the time-scale of the life of a single individual, the longer one on which a population can be characterized , the ecological one and at least four nested ones on which selection occurs. The limiting behavior is generally deterministic. Yet, since the selection acts on the alea in the history of individuals, the probabilities are shown to be a key element of understanding . 
Besides, the scarcity of mutations fixing in the poplation also induces some stochasticity .", "after_revision": "This article is a presentation of specific recent results describing scaling limits of individual-based models. Thanks to them , we wish to relate the time-scales typical of demographic dynamics and natural selection to the parameters of the individual-based models. Although these results are by no means exhaustive, both on the mathematical and the biological level, they complement each other . Indeed, they provide a viewpoint for many classical time-scales . Namely, they encompass the timescale typical of the life-expectancy of a single individual, the longer one wherein a population can be characterized through its demographic dynamics, and at least four interconnected ones wherein selection occurs. The limiting behavior is generally deterministic. Yet, since there are selective effects on randomness in the history of lineages, probability theory is shown to be a key factor in understanding the results . Besides, randomness can be maintained in the limiting dynamics, for instance to model rare mutations fixing in the population .", "edit_actions": [{"type": "R", "before": "Through the recent results that we present", "after": "This article is a presentation of specific recent results describing scaling limits of individual-based models. 
Thanks to them", "start_char_pos": 0, "end_char_pos": 42}, {"type": "R", "before": "highlight the various", "after": "relate the", "start_char_pos": 56, "end_char_pos": 77}, {"type": "R", "before": "on which the", "after": "typical of", "start_char_pos": 90, "end_char_pos": 102}, {"type": "R", "before": "the natural selection can be retrieved from", "after": "natural selection to the parameters of the", "start_char_pos": 128, "end_char_pos": 171}, {"type": "R", "before": "are very complementary", "after": "complement each other", "start_char_pos": 304, "end_char_pos": 326}, {"type": "R", "before": "the most", "after": "many", "start_char_pos": 366, "end_char_pos": 374}, {"type": "R", "before": "of interest. They namely encompass the time-scale of the life", "after": ". Namely, they encompass the timescale typical of the life-expectancy", "start_char_pos": 397, "end_char_pos": 458}, {"type": "R", "before": "on which", "after": "wherein", "start_char_pos": 498, "end_char_pos": 506}, {"type": "R", "before": ", the ecological one", "after": "through its demographic dynamics,", "start_char_pos": 541, "end_char_pos": 561}, {"type": "R", "before": "nested ones on which", "after": "interconnected ones wherein", "start_char_pos": 580, "end_char_pos": 600}, {"type": "R", "before": "the selection acts on the alea", "after": "there are selective effects on randomness", "start_char_pos": 680, "end_char_pos": 710}, {"type": "R", "before": "individuals, the probabilities are", "after": "lineages, probability theory is", "start_char_pos": 729, "end_char_pos": 763}, {"type": "R", "before": "element of understanding", "after": "factor in understanding the results", "start_char_pos": 782, "end_char_pos": 806}, {"type": "R", "before": "the scarcity of", "after": "randomness can be maintained in the limiting dynamics, for instance to model rare", "start_char_pos": 818, "end_char_pos": 833}, {"type": "R", "before": "poplation also induces some stochasticity", "after": "population", 
"start_char_pos": 858, "end_char_pos": 899}], "sents_char_pos": [0, 196, 328, 409, 618, 668, 808]} {"doc_id": "1912.04774", "revision_depth": "1", "before_revision": "A concern central to the economics of privacy is that firms may use consumer data to price discriminate. A common response is that consumers should have control over their data and the ability to choose howfirms access it . Since firms draw inferences based on both the data seen as well as the consumer's disclosure choices, the strategic implications of this proposal are unclear. We investigate whether such measures improve consumer welfare in monopolistic and competitive environments . We find that consumer control can guarantee gains for every consumer type relative to both perfect price discrimination and no personalized pricing. This result is driven by two ideas. First, consumers can use disclosure to amplify competition between firms . Second, consumers can share information that induces a seller---even a monopolist---to make price concessions. Furthermore, whether consumer control improves consumer surplus depends on both the technology of disclosure and the competitivenessof the marketplace. In a competitive market, simple disclosure technologies such as \"track / do-not-track\" suffice for guaranteeing gains in consumer welfare. However, in a monopolistic market, welfare gains require richer forms of disclosure technology whereby consumers can decide how much information they would like to convey .", "after_revision": "Central to privacy concerns is that firms may use consumer data to price discriminate. A common policy response is that consumers should be given control over which firms access their data and how . Since firms learn about a consumer's preferences based on the data seen and the consumer's disclosure choices, the equilibrium implications of consumer control are unclear. We study whether such measures improve consumer welfare in monopolistic and competitive markets . 
We find that consumer control can improve consumer welfare relative to both perfect price discrimination and no personalized pricing. First, consumers can use disclosure to amplify competitive forces . Second, consumers can disclose information to induce even a monopolist to lower prices. Whether consumer control improves welfare depends on the disclosure technology and market competitiveness. Simple disclosure technologies suffice in competitive markets. When facing a monopolist, a consumer needs partial disclosure possibilities to obtain any welfare gains .", "edit_actions": [{"type": "R", "before": "A concern central to the economics of privacy", "after": "Central to privacy concerns", "start_char_pos": 0, "end_char_pos": 45}, {"type": "A", "before": null, "after": "policy", "start_char_pos": 114, "end_char_pos": 114}, {"type": "R", "before": "have control over", "after": "be given control over which firms access", "start_char_pos": 149, "end_char_pos": 166}, {"type": "R", "before": "the ability to choose howfirms access it", "after": "how", "start_char_pos": 182, "end_char_pos": 222}, {"type": "R", "before": "draw inferences based on both", "after": "learn about a consumer's preferences based on", "start_char_pos": 237, "end_char_pos": 266}, {"type": "R", "before": "as well as", "after": "and", "start_char_pos": 281, "end_char_pos": 291}, {"type": "R", "before": "strategic implications of this proposal", "after": "equilibrium implications of consumer control", "start_char_pos": 331, "end_char_pos": 370}, {"type": "R", "before": "investigate", "after": "study", "start_char_pos": 387, "end_char_pos": 398}, {"type": "R", "before": "environments", "after": "markets", "start_char_pos": 478, "end_char_pos": 490}, {"type": "R", "before": "guarantee gains for every consumer type", "after": "improve consumer welfare", "start_char_pos": 527, "end_char_pos": 566}, {"type": "D", "before": "This result is driven by two ideas.", "after": null, "start_char_pos": 642, 
"end_char_pos": 677}, {"type": "R", "before": "competition between firms", "after": "competitive forces", "start_char_pos": 725, "end_char_pos": 750}, {"type": "R", "before": "share information that induces a seller---even a monopolist---to make price concessions. Furthermore, whether", "after": "disclose information to induce even a monopolist to lower prices. Whether", "start_char_pos": 775, "end_char_pos": 884}, {"type": "R", "before": "consumer surplus depends on both the technology of disclosure and the competitivenessof the marketplace. In a competitive market, simple disclosure technologies such as \"track / do-not-track\" suffice for guaranteeing gains in consumer welfare. However, in a monopolistic market, welfare gains require richer forms of disclosure technology whereby consumers can decide how much information they would like to convey", "after": "welfare depends on the disclosure technology and market competitiveness. Simple disclosure technologies suffice in competitive markets. When facing a monopolist, a consumer needs partial disclosure possibilities to obtain any welfare gains", "start_char_pos": 911, "end_char_pos": 1325}], "sents_char_pos": [0, 104, 383, 492, 641, 677, 863, 1015, 1154]} {"doc_id": "1912.09765", "revision_depth": "3", "before_revision": "Availability codes have recently been proposed to facilitate efficient retrieval of frequently accessed (hot) data objects in distributed storage systems. This paper presents techniques for analyzing the download time of systematic availability codes considering the Fork-Join scheme for data access. Specifically, we consider the setup in which requests arrive for downloading individual data objects, and each request is replicated (forked) to the systematic server containing the object and all of its recovery groups. 
For low-traffic regime , when there is at most one request in the system, we compute the download time in closed-form and compare it across systems with availability, maximum distance separable (MDS), and replication codes . We demonstrate that availability codes can reduce download time in some settings , but are not always optimal. When the low-traffic assumption does not hold, system consists of multiple inter-dependent Fork-Join queues, which makes exact analysis intractable due to state space explosion. Here , we present upper and lower bounds on the download time, and an M/G/1 queue approximation for several special cases of interest. Via extensive numerical simulations, we evaluate our bounds , and demonstrate that the M/G/1 queue approximation has a high degree of accuracy.", "after_revision": "The paper presents techniques for analyzing the expected download time in distributed storage systems that employ systematic availability codes . These codes provide access to hot data through the systematic server containing the object and multiple recovery groups. When a request for an object is received, it can be replicated (forked) to the systematic server and all recovery groups. We first consider the low-traffic regime and present the close-form expression for the download time . By comparison across systems with availability, maximum distance separable (MDS), and replication codes , we demonstrate that availability codes can reduce download time in some settings but are not always optimal. In the high-traffic regime, the system consists of multiple inter-dependent Fork-Join queues, making exact analysis intractable . Accordingly , we present upper and lower bounds on the download time, and an M/G/1 queue approximation for several cases of interest. 
Via extensive numerical simulations, we evaluate our bounds and demonstrate that the M/G/1 queue approximation has a high degree of accuracy.", "edit_actions": [{"type": "R", "before": "Availability codes have recently been proposed to facilitate efficient retrieval of frequently accessed (hot) data objects in distributed storage systems. This", "after": "The", "start_char_pos": 0, "end_char_pos": 159}, {"type": "R", "before": "download time of", "after": "expected download time in distributed storage systems that employ", "start_char_pos": 204, "end_char_pos": 220}, {"type": "R", "before": "considering the Fork-Join scheme for data access. Specifically, we consider the setup in which requests arrive for downloading individual data objects, and each request is", "after": ". These codes provide access to hot data through the systematic server containing the object and multiple recovery groups. When a request for an object is received, it can be", "start_char_pos": 251, "end_char_pos": 422}, {"type": "R", "before": "containing the object and all of its", "after": "and all", "start_char_pos": 468, "end_char_pos": 504}, {"type": "R", "before": "For", "after": "We first consider the", "start_char_pos": 522, "end_char_pos": 525}, {"type": "R", "before": ", when there is at most one request in the system, we compute", "after": "and present the close-form expression for", "start_char_pos": 545, "end_char_pos": 606}, {"type": "R", "before": "in closed-form and compare it", "after": ". By comparison", "start_char_pos": 625, "end_char_pos": 654}, {"type": "R", "before": ". 
We", "after": ", we", "start_char_pos": 745, "end_char_pos": 749}, {"type": "D", "before": ",", "after": null, "start_char_pos": 828, "end_char_pos": 829}, {"type": "R", "before": "When the low-traffic assumption does not hold,", "after": "In the high-traffic regime, the", "start_char_pos": 858, "end_char_pos": 904}, {"type": "R", "before": "which makes", "after": "making", "start_char_pos": 967, "end_char_pos": 978}, {"type": "R", "before": "due to state space explosion. Here", "after": ". Accordingly", "start_char_pos": 1006, "end_char_pos": 1040}, {"type": "D", "before": "special", "after": null, "start_char_pos": 1144, "end_char_pos": 1151}, {"type": "D", "before": ",", "after": null, "start_char_pos": 1231, "end_char_pos": 1232}], "sents_char_pos": [0, 154, 300, 521, 746, 857, 1035, 1170]} {"doc_id": "1912.09816", "revision_depth": "1", "before_revision": "Community detection is a fundamental problem in social network analysis consisting , roughly speaking, in dividing social actors ( modelled as nodes in a social graph) with certain social connections ( modelled as edges in the social graph) into densely knitted and highly related groups with each group well separated from the others. Classical approaches for community detection usually deal only with the structure of the network and ignore features of the nodes , although major real-world networks provide additional actors' information such as age, gender, interests, etc. , traditionally called node attributes. It is known that the attributes may clarify and enrich the knowledge about the actors and give sense to the detected communities. This has led to a relatively novel direction in community detection --- constructing algorithms that use both the structure and the attributes of the network (modelled already via a node-attributed graph) to yield more informative and qualitative results. During the last decade many methods based on different ideas and techniques have appearedin this direction . 
Although there exist some partial overviews of them, a recent survey is a necessity as the growing number of the methods may cause uncertainty in practice. In this paper we aim at clarifying the overall situation by proposing a clear classification of the methods and providing a comprehensive survey of the available results . We not only group and analyse the corresponding methods but also focus on practical aspects, including the information which methods outperform others and which datasets and quality measures are used for evaluation .", "after_revision": "Community detection is a fundamental problem in social network analysis consisting in unsupervised dividing social actors ( nodes in a social graph) with certain social connections ( edges in a social graph) into densely knitted and highly related groups with each group well separated from the others. Classical approaches for community detection usually deal only with network structure and ignore features of its nodes (called node attributes), although many real-world social networks provide additional actors' information such as interests. It is believed that the attributes may clarify and enrich the knowledge about the actors and give sense to the communities. This belief has motivated the progress in developing community detection methods that use both the structure and the attributes of network (i.e. deal with a node-attributed graph) to yield more informative and qualitative results. During the last decade many such methods based on different ideas have appeared . Although there exist partial overviews of them, a recent survey is a necessity as the growing number of the methods may cause repetitions in methodology and uncertainty in practice. In this paper we aim at describing and clarifying the overall situation in the field of community detection in node-attributed social networks. 
Namely, we perform an exhaustive search of known methods and propose a classification of them based on when and how structure and attributes are fused . We not only give a description of each class but also provide general technical ideas behind each method in the class. Furthermore, we pay attention to available information which methods outperform others and which datasets and quality measures are used for their evaluation. Basing on the information collected, we make conclusions on the current state of the field and disclose several problems that seem important to be resolved in future .", "edit_actions": [{"type": "R", "before": ", roughly speaking, in", "after": "in unsupervised", "start_char_pos": 83, "end_char_pos": 105}, {"type": "D", "before": "modelled as", "after": null, "start_char_pos": 131, "end_char_pos": 142}, {"type": "R", "before": "modelled as edges in the", "after": "edges in a", "start_char_pos": 202, "end_char_pos": 226}, {"type": "R", "before": "the structure of the network", "after": "network structure", "start_char_pos": 404, "end_char_pos": 432}, {"type": "R", "before": "the nodes , although major", "after": "its nodes (called node attributes), although many", "start_char_pos": 456, "end_char_pos": 482}, {"type": "A", "before": null, "after": "social", "start_char_pos": 494, "end_char_pos": 494}, {"type": "R", "before": "age, gender, interests, etc. , traditionally called node attributes. It is known", "after": "interests. It is believed", "start_char_pos": 551, "end_char_pos": 631}, {"type": "D", "before": "detected", "after": null, "start_char_pos": 728, "end_char_pos": 736}, {"type": "R", "before": "has led to a relatively novel direction in community detection --- constructing algorithms", "after": "belief has motivated the progress in developing community detection methods", "start_char_pos": 755, "end_char_pos": 845}, {"type": "R", "before": "the network (modelled already via", "after": "network (i.e. 
deal with", "start_char_pos": 896, "end_char_pos": 929}, {"type": "A", "before": null, "after": "such", "start_char_pos": 1034, "end_char_pos": 1034}, {"type": "R", "before": "and techniques have appearedin this direction", "after": "have appeared", "start_char_pos": 1068, "end_char_pos": 1113}, {"type": "D", "before": "some", "after": null, "start_char_pos": 1137, "end_char_pos": 1141}, {"type": "A", "before": null, "after": "repetitions in methodology and", "start_char_pos": 1247, "end_char_pos": 1247}, {"type": "A", "before": null, "after": "describing and", "start_char_pos": 1297, "end_char_pos": 1297}, {"type": "R", "before": "by proposing a clear classification of the methods and providing a comprehensive survey of the available results", "after": "in the field of community detection in node-attributed social networks. Namely, we perform an exhaustive search of known methods and propose a classification of them based on when and how structure and attributes are fused", "start_char_pos": 1331, "end_char_pos": 1443}, {"type": "R", "before": "group and analyse the corresponding methods but also focus on practical aspects, including the", "after": "give a description of each class but also provide general technical ideas behind each method in the class. Furthermore, we pay attention to available", "start_char_pos": 1458, "end_char_pos": 1552}, {"type": "R", "before": "evaluation", "after": "their evaluation. Basing on the information collected, we make conclusions on the current state of the field and disclose several problems that seem important to be resolved in future", "start_char_pos": 1650, "end_char_pos": 1660}], "sents_char_pos": [0, 335, 619, 749, 1005, 1115, 1272, 1445]} {"doc_id": "1912.10514", "revision_depth": "1", "before_revision": "An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of back-translations of the target-side monolingual data. 
Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and natural data. This improves standard back-translation and also enables the use of iterative back-translation on language pairs that underperformed using standard back-translation. This work presents a simplified approach of differentiating between the two data using pretraining and finetuning . The approach - tag-less back-translation - trains the model on the synthetic data and finetunes it on the natural data. Preliminary experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese neural machine translation . While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation approach by an average of 0.4 BLEU .", "after_revision": "An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of back-translations of the target-side monolingual data. The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data. Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that under-performed using standard back-translation. This work presents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data . The approach - tag-less back-translation - trains the model on the synthetic data and fine-tunes it on the authentic data. Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT . 
While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU . The approach reached the best scores in less training time than the standard and tagged back-translation approaches .", "edit_actions": [{"type": "A", "before": null, "after": "The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data.", "start_char_pos": 201, "end_char_pos": 201}, {"type": "R", "before": "natural data. This improves", "after": "authentic data, improving", "start_char_pos": 307, "end_char_pos": 334}, {"type": "R", "before": "enables", "after": "enabling", "start_char_pos": 370, "end_char_pos": 377}, {"type": "R", "before": "underperformed", "after": "under-performed", "start_char_pos": 439, "end_char_pos": 453}, {"type": "R", "before": "a simplified", "after": "pre-training and fine-tuning as a simplified but more effective", "start_char_pos": 506, "end_char_pos": 518}, {"type": "D", "before": "using pretraining and finetuning", "after": null, "start_char_pos": 568, "end_char_pos": 600}, {"type": "R", "before": "finetunes", "after": "fine-tunes", "start_char_pos": 689, "end_char_pos": 698}, {"type": "R", "before": "natural data. Preliminary experiments", "after": "authentic data. 
Experiments", "start_char_pos": 709, "end_char_pos": 746}, {"type": "R", "before": "continuously outperform the tagging approach", "after": "outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively", "start_char_pos": 774, "end_char_pos": 818}, {"type": "R", "before": "neural machine translation", "after": "NMT", "start_char_pos": 854, "end_char_pos": 880}, {"type": "R", "before": "approach outperformed the", "after": "technique outperformed", "start_char_pos": 954, "end_char_pos": 979}, {"type": "R", "before": "approach by an average of", "after": "by", "start_char_pos": 1004, "end_char_pos": 1029}, {"type": "A", "before": null, "after": ". The approach reached the best scores in less training time than the standard and tagged back-translation approaches", "start_char_pos": 1039, "end_char_pos": 1039}], "sents_char_pos": [0, 200, 320, 486, 602, 722, 882]} {"doc_id": "1912.11870", "revision_depth": "1", "before_revision": "The effects of positive and negative hydrationof counterions (Na^{+ and Cs^{+} ) incorporated into the hydration shell of the DNA double helix have been studied using molecular dynamics approach . The results show that the dynamics of the hydration shell of counterions depends on region of ion localization around the macromolecule. The longest residence times have been observed for water molecules near the counterionsthat are localized in the minor groove of the double helix : about 30 ps in the case of Na^{+ K^{+} and Cs^{+} counterions. In the major groove and outside the double helix it is essentially lower. The counterions constrain water molecules too strong, and as the result the effect of negative hydration for K^{+} and Cs^{+} counterions was not observed in the simulations. 
The analysis show that the effects of counterion hydration may be described better by using the water models with lower dipole moments.", "after_revision": "The DNA double helix is a polyanionic macromolecule that in water solutions is neutralized by metal ions (counterions). The property of the counterions to stabilize the water network (positive hydration) or to make it friable (negative hydration) is important in terms of the physical mechanisms of stabilization of the DNA double helix. In the present research, the effects of positive hydration of Na^{+ and Cs^{+} counterions, incorporated into the hydration shell of the DNA double helix have been studied using molecular dynamics simulations . The results have shown that the dynamics of the hydration shell of counterions depends on region of the double helix: minor groove, major groove, and outside the macromolecule. The longest average residence time has been observed for water molecules contacting with the counterions, localized in the minor groove of the double helix (about 50 ps for Na^{+ K^{+} and Cs^{+} ). The estimated potentials of mean force for the hydration shells of the counterions show that the water molecules are constrained too strong, and , consequently, the effect of negative hydration for K^{+} and Cs^{+} counterions has not been observed in the simulations. The analysis has shown that the effects of counterion hydration can be described more accurately with water models having lower dipole moments.", "edit_actions": [{"type": "R", "before": "effects of positive and negative hydrationof counterions (Na^{+", "after": "DNA double helix is a polyanionic macromolecule that in water solutions is neutralized by metal ions (counterions). The property of the counterions to stabilize the water network (positive hydration) or to make it friable (negative hydration) is important in terms of the physical mechanisms of stabilization of the DNA double helix. 
In the present research, the effects of positive hydration of Na^{+", "start_char_pos": 4, "end_char_pos": 67}, {"type": "R", "before": ")", "after": "counterions,", "start_char_pos": 79, "end_char_pos": 80}, {"type": "R", "before": "approach", "after": "simulations", "start_char_pos": 186, "end_char_pos": 194}, {"type": "R", "before": "show", "after": "have shown", "start_char_pos": 209, "end_char_pos": 213}, {"type": "R", "before": "ion localization around the", "after": "the double helix: minor groove, major groove, and outside the", "start_char_pos": 291, "end_char_pos": 318}, {"type": "R", "before": "residence times have", "after": "average residence time has", "start_char_pos": 346, "end_char_pos": 366}, {"type": "R", "before": "near the counterionsthat are", "after": "contacting with the counterions,", "start_char_pos": 401, "end_char_pos": 429}, {"type": "R", "before": ": about 30 ps in the case of Na^{+", "after": "(about 50 ps for Na^{+", "start_char_pos": 480, "end_char_pos": 514}, {"type": "R", "before": "counterions. In the major groove and outside the double helix it is essentially lower. The counterions constrain water molecules", "after": "). 
The estimated potentials of mean force for the hydration shells of the counterions show that the water molecules are constrained", "start_char_pos": 532, "end_char_pos": 660}, {"type": "R", "before": "as the result the", "after": ", consequently, the", "start_char_pos": 677, "end_char_pos": 694}, {"type": "R", "before": "was not", "after": "has not been", "start_char_pos": 757, "end_char_pos": 764}, {"type": "R", "before": "show", "after": "has shown", "start_char_pos": 807, "end_char_pos": 811}, {"type": "R", "before": "may be described better by using the water models with", "after": "can be described more accurately with water models having", "start_char_pos": 853, "end_char_pos": 907}], "sents_char_pos": [0, 196, 333, 544, 618, 793]} {"doc_id": "2001.01266", "revision_depth": "2", "before_revision": "Using extremely large number of processing elements in computing systems leads to unexpected phenomena, such as different efficiencies of the same system for different tasks, that cannot be explained in the frame of the classical computing paradigm. The introduced simple non-technical model enables to set up a frame and formalism needed to explain the unexpected experiences around supercomputing. The paper shows that the degradation of the efficiency of the parallelized sequential system is a natural consequence of the computing paradigm, rather than an engineering imperfectness. The workload is greatly responsible for wasting the energy as well as limiting the size and the type of tasks the supercomputers can run . Case studies provide insight how the different contributions compete for dominating the resulting payload performance of the computing system, and how enhancing the technology made the computing+communication the dominating contribution in defining the efficiency of supercomputers. 
The model also enables to derive predictions about the supercomputer performance limitations for the near future as well as provides hints for enhancing the supercomputer components. The phenomena show interesting parallels with the phenomena experienced in science more than a century ago and through their studying a modern science was developed.", "after_revision": "Using an extremely large number of processing elements in computing systems leads to unexpected phenomena, such as different efficiencies of the same system for different tasks, that cannot be explained in the frame of classical computing paradigm. The simple non-technical (but considering the temporal behavior of the components) model, introduced here, enables us to set up a frame and formalism , needed to explain those unexpected experiences around supercomputing. Introducing temporal behavior into computer science also explains why only the extreme scale computing enabled us to reveal the experienced limitations. The paper shows , that degradation of efficiency of parallelized sequential systems is a natural consequence of the classical computing paradigm, instead of being an engineering imperfectness. The workload , that supercomputers run, is much responsible for wasting energy, as well as limiting the size and type of tasks . Case studies provide insight , how different contributions compete for dominating the resulting payload performance of a computing system, and how enhancing the interconnection technology made computing+communication to dominate in defining the efficiency of supercomputers. Our model also enables to derive predictions about supercomputer performance limitations for the near future , as well as it provides hints for enhancing supercomputer components. 
Phenomena experienced in large-scale computing show interesting parallels with phenomena experienced in science , more than a century ago , and through their studying a modern science was developed.", "edit_actions": [{"type": "A", "before": null, "after": "an", "start_char_pos": 6, "end_char_pos": 6}, {"type": "D", "before": "the", "after": null, "start_char_pos": 217, "end_char_pos": 220}, {"type": "D", "before": "introduced", "after": null, "start_char_pos": 255, "end_char_pos": 265}, {"type": "R", "before": "model enables", "after": "(but considering the temporal behavior of the components) model, introduced here, enables us", "start_char_pos": 287, "end_char_pos": 300}, {"type": "A", "before": null, "after": ",", "start_char_pos": 333, "end_char_pos": 333}, {"type": "R", "before": "the", "after": "those", "start_char_pos": 352, "end_char_pos": 355}, {"type": "A", "before": null, "after": "Introducing temporal behavior into computer science also explains why only the extreme scale computing enabled us to reveal the experienced limitations.", "start_char_pos": 402, "end_char_pos": 402}, {"type": "R", "before": "that the degradation of the efficiency of the parallelized sequential system", "after": ", that degradation of efficiency of parallelized sequential systems", "start_char_pos": 419, "end_char_pos": 495}, {"type": "A", "before": null, "after": "classical", "start_char_pos": 528, "end_char_pos": 528}, {"type": "R", "before": "rather than", "after": "instead of being", "start_char_pos": 549, "end_char_pos": 560}, {"type": "R", "before": "is greatly", "after": ", that supercomputers run, is much", "start_char_pos": 604, "end_char_pos": 614}, {"type": "R", "before": "the energy", "after": "energy,", "start_char_pos": 639, "end_char_pos": 649}, {"type": "D", "before": "the", "after": null, "start_char_pos": 683, "end_char_pos": 686}, {"type": "D", "before": "the supercomputers can run", "after": null, "start_char_pos": 701, "end_char_pos": 727}, {"type": "R", 
"before": "how the", "after": ", how", "start_char_pos": 759, "end_char_pos": 766}, {"type": "R", "before": "the", "after": "a", "start_char_pos": 851, "end_char_pos": 854}, {"type": "R", "before": "technology made the", "after": "interconnection technology made", "start_char_pos": 895, "end_char_pos": 914}, {"type": "R", "before": "the dominating contribution", "after": "to dominate", "start_char_pos": 939, "end_char_pos": 966}, {"type": "R", "before": "The", "after": "Our", "start_char_pos": 1013, "end_char_pos": 1016}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1064, "end_char_pos": 1067}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1126, "end_char_pos": 1126}, {"type": "A", "before": null, "after": "it", "start_char_pos": 1138, "end_char_pos": 1138}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1168, "end_char_pos": 1171}, {"type": "R", "before": "The phenomena", "after": "Phenomena experienced in large-scale computing", "start_char_pos": 1198, "end_char_pos": 1211}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1244, "end_char_pos": 1247}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1281, "end_char_pos": 1281}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1306, "end_char_pos": 1306}], "sents_char_pos": [0, 250, 401, 590, 1012, 1197]} {"doc_id": "2001.01340", "revision_depth": "1", "before_revision": "This study focuses on establishing an external human-machine interface (eHMI) for automated vehicles (AVs) that can clearly and quickly convey driving intentions to pedestrians, thus improving the acceptability of AVs. Frist of all, this study seeks to evaluate whether pedestrians clearly receive the information from the AV. This paper proposes a hypothesis based on a decision-making model of the pedestrian. If a pedestrian does not accurately understand the driving intentions of AVs, then his/her gaze duration at AVs will increase . 
A pedestrian--vehicle interaction experiment was designed to verify the proposed hypothesis. An AV was used for interacting with pedestrians but whether the AV stops or not during interactions that was controlled by the experimenter. The gaze data of pedestrians and their subjective evaluations of the AV's driving intentions were observed . The experimental results supported the hypothesis , i.e., there was a correlation between the participants ' gaze duration at an AV and their understanding of the AV's driving intention . Moreover, the gaze duration of most participants at a manually driving vehicle was shorter than that at an AV. Besides, we proposed two recommendations to the designers of eHMI : (1) when a pedestrian is engaged in an interaction with the AV, the driving intentions of the AV should be provided; (2) if the pedestrian still gazes at the AV after the AV displays its driving intentions, the AV should provide more clear information about its driving intentions.", "after_revision": "Interactions between pedestrians and automated vehicles (AVs) will increase significantly with the popularity of AV. However, pedestrians often have not enough trust on the AVs , particularly when they are confused about an AV's intention in a interaction. This study seeks to evaluate if pedestrians clearly understand the driving intentions of AVs in interactions and presents experimental research on the relationship between gaze behaviors of pedestrians and their understanding of the intentions of the AV. The hypothesis investigated in this study was that the less the pedestrian understands the driving intentions of the AV, the longer the duration of their gazing behavior will be . A pedestrian--vehicle interaction experiment was designed to verify the proposed hypothesis. 
A robotic wheelchair was used as the manual driving vehicle (MV) and AV for interacting with pedestrians while pedestrians' gaze data and their subjective evaluation of the driving intentions were recorded . The experimental results supported our hypothesis as there was a negative correlation between the pedestrians ' gaze duration on the AV and their understanding of the driving intentions of the AV . Moreover, the gaze duration of most of the pedestrians on the MV was shorter than that on an AV. Therefore, we conclude with two recommendations to designers of external human-machine interfaces (eHMI) : (1) when a pedestrian is engaged in an interaction with an AV, the driving intentions of the AV should be provided; (2) if the pedestrian still gazes at the AV after the AV displays its driving intentions, the AV should provide clearer information about its driving intentions.", "edit_actions": [{"type": "R", "before": "This study focuses on establishing an external human-machine interface (eHMI) for", "after": "Interactions between pedestrians and", "start_char_pos": 0, "end_char_pos": 81}, {"type": "R", "before": "that can clearly and quickly convey driving intentions to pedestrians, thus improving the acceptability of AVs. Frist of all, this", "after": "will increase significantly with the popularity of AV. However, pedestrians often have not enough trust on the AVs , particularly when they are confused about an AV's intention in a interaction. This", "start_char_pos": 107, "end_char_pos": 237}, {"type": "R", "before": "whether pedestrians clearly receive the information from the AV. This paper proposes a hypothesis based on a decision-making model of the pedestrian. If a pedestrian does not accurately understand", "after": "if pedestrians clearly understand the driving intentions of AVs in interactions and presents experimental research on the relationship between gaze behaviors of pedestrians and their understanding of the intentions of the AV. 
The hypothesis investigated in this study was that the less the pedestrian understands", "start_char_pos": 262, "end_char_pos": 458}, {"type": "R", "before": "AVs, then his/her gaze duration at AVs will increase", "after": "the AV, the longer the duration of their gazing behavior will be", "start_char_pos": 485, "end_char_pos": 537}, {"type": "R", "before": "An AV was used", "after": "A robotic wheelchair was used as the manual driving vehicle (MV) and AV", "start_char_pos": 633, "end_char_pos": 647}, {"type": "R", "before": "but whether the AV stops or not during interactions that was controlled by the experimenter. The gaze data of pedestrians", "after": "while pedestrians' gaze data", "start_char_pos": 681, "end_char_pos": 802}, {"type": "R", "before": "evaluations of the AV's", "after": "evaluation of the", "start_char_pos": 824, "end_char_pos": 847}, {"type": "R", "before": "observed", "after": "recorded", "start_char_pos": 872, "end_char_pos": 880}, {"type": "R", "before": "the hypothesis , i.e.,", "after": "our hypothesis as", "start_char_pos": 918, "end_char_pos": 940}, {"type": "A", "before": null, "after": "negative", "start_char_pos": 953, "end_char_pos": 953}, {"type": "R", "before": "participants", "after": "pedestrians", "start_char_pos": 978, "end_char_pos": 990}, {"type": "R", "before": "at an", "after": "on the", "start_char_pos": 1007, "end_char_pos": 1012}, {"type": "R", "before": "AV's driving intention", "after": "driving intentions of the AV", "start_char_pos": 1047, "end_char_pos": 1069}, {"type": "R", "before": "participants at a manually driving vehicle", "after": "of the pedestrians on the MV", "start_char_pos": 1108, "end_char_pos": 1150}, {"type": "R", "before": "at", "after": "on", "start_char_pos": 1173, "end_char_pos": 1175}, {"type": "R", "before": "Besides, we proposed", "after": "Therefore, we conclude with", "start_char_pos": 1183, "end_char_pos": 1203}, {"type": "R", "before": "the designers of eHMI", "after": "designers of 
external human-machine interfaces (eHMI)", "start_char_pos": 1227, "end_char_pos": 1248}, {"type": "R", "before": "the", "after": "an", "start_char_pos": 1307, "end_char_pos": 1310}, {"type": "R", "before": "more clear", "after": "clearer", "start_char_pos": 1480, "end_char_pos": 1490}], "sents_char_pos": [0, 218, 326, 411, 539, 632, 773, 882, 1071, 1182, 1367]} {"doc_id": "2001.03197", "revision_depth": "1", "before_revision": "The emergence of autonomous vehicles ( AVs) is anticipated to influence the public transportation (PT) system . Many possible relationships between AV and PT are proposed depending on the policy and institution, where competition and cooperation are two main categories. This paper focuses on the former in a hypothetical scenario- \"if both AV and PT operators were only profit-oriented . \" We aim to quantitatively evaluate the system performance (e.g. level of service, operators' financial viability, transport efficiency) when AV and PT are profit-oriented competitors with dynamic adjustable supply strategies under certain policy constraints. We assume AV can adjust the fleetsize and PT can adjust the headway. Service fare and bus routes are fixed. The competition process is analyzed through an agent-based simulation platform, which incorporates a proposed heuristic dynamic supply updating algorithm (HDSUA). The first-mile scenario in Singapore Tampinesarea is selected as the case study, where only bus is considered for PT system. We found that when AV and bus operators are given the flexibility to adjust supply, both of them will re-distribute their supply spatially and temporally, leading to higher profits . In temporal dimension, both AV and bus will concentrate their supplies in morning and evening peak hours, and reduce the supplies in off-peak hours. The competition between AV and PT decreases passengers' travel time but increase their travel cost. The generalized travel cost is still reduced when counting the value of time. 
The bus supply adjustment can increase the bus average load and reduce total passenger car equivalent (PCE), which is good for transport efficiency and sustainability. But the AV supply adjustment shows the opposite effect. Overall, the competition does not necessarily bring out loss-gain results. A win-win outcome is also possible under certain policy interventions .", "after_revision": "The emerging autonomous vehicles ( AV) can either supplement the public transportation (PT) system or be a competitor with it. This paper focuses on this competition in a hypothetical scenario-- \"if both AV and PT operators are profit-oriented , \" and uses an ABM to quantitatively evaluate the system performance in this competition from the perspectives of four stakeholders--AV operator, PT operator, passengers, and public authority. In our model, AV operator updates its supply by changing fleet sizes while PT by adjusting headways, and both use heuristic approaches to update supply in order to increase profits. We implement the model in the first-mile scenario in Tampines. In four regulation scenarios--two by two combinations regarding whether AV and PT are allowed to change supplies--we find that since AV can release the bus operator from low-demand routes, the competition can lead to higher profits of both, and higher system efficiency, simultaneously, rather than a one-sided loss-gain result. For PT, after supply updates, spatially the services are concentrated to short feeder routes directly to the subway station, and temporally concentrated to peak hours. For passengers, the competition reduces their travel time but increases their travel costs. Nonetheless, the generalized travel cost is still reduced when counting the value of time. For system efficiency and sustainability, bus supply adjustment can increase the bus average load and reduce total PCE, while the AV supply adjustment shows the opposite effect. 
For policy implications, the paper suggests that PT should be allowed to optimize its supply strategies under specific operation goal constraints, and AV operation should be regulated to relieve its externality on the system, including limiting the number of licenses, operation time, and service areas, which makes AV operate like a complementary mode to PT .", "edit_actions": [{"type": "R", "before": "emergence of", "after": "emerging", "start_char_pos": 4, "end_char_pos": 16}, {"type": "R", "before": "AVs) is anticipated to influence", "after": "AV) can either supplement", "start_char_pos": 39, "end_char_pos": 71}, {"type": "R", "before": ". Many possible relationships between AV and PT are proposed depending on the policy and institution, where competition and cooperation are two main categories.", "after": "or be a competitor with it.", "start_char_pos": 110, "end_char_pos": 270}, {"type": "R", "before": "the former", "after": "this competition", "start_char_pos": 293, "end_char_pos": 303}, {"type": "R", "before": "scenario-", "after": "scenario--", "start_char_pos": 322, "end_char_pos": 331}, {"type": "R", "before": "were only", "after": "are", "start_char_pos": 361, "end_char_pos": 370}, {"type": "R", "before": ".", "after": ",", "start_char_pos": 387, "end_char_pos": 388}, {"type": "R", "before": "We aim", "after": "and uses an ABM", "start_char_pos": 391, "end_char_pos": 397}, {"type": "R", "before": "(e.g. level of service, operators' financial viability, transport efficiency) when AV and PT are profit-oriented competitors with dynamic adjustable supply strategies under certain policy constraints. We assume AV can adjust the fleetsize and PT can adjust the headway. Service fare and bus routes are fixed. The competition process is analyzed through an agent-based simulation platform, which incorporates a proposed heuristic dynamic supply updating algorithm (HDSUA). 
The", "after": "in this competition from the perspectives of four stakeholders--AV operator, PT operator, passengers, and public authority. In our model, AV operator updates its supply by changing fleet sizes while PT by adjusting headways, and both use heuristic approaches to update supply in order to increase profits. We implement the model in the", "start_char_pos": 448, "end_char_pos": 923}, {"type": "R", "before": "Singapore Tampinesarea is selected as the case study, where only bus is considered for PT system. We found that when AV and bus operators are given the flexibility to adjust supply, both of them will re-distribute their supply spatially and temporally, leading", "after": "Tampines. In four regulation scenarios--two by two combinations regarding whether AV and PT are allowed to change supplies--we find that since AV can release the bus operator from low-demand routes, the competition can lead", "start_char_pos": 947, "end_char_pos": 1207}, {"type": "R", "before": ". In temporal dimension, both AV and bus will concentrate their supplies in morning and evening peak hours, and reduce the supplies in off-peak hours. The competition between AV and PT decreases passengers'", "after": "of both, and higher system efficiency, simultaneously, rather than a one-sided loss-gain result. For PT, after supply updates, spatially the services are concentrated to short feeder routes directly to the subway station, and temporally concentrated to peak hours. For passengers, the competition reduces their", "start_char_pos": 1226, "end_char_pos": 1432}, {"type": "R", "before": "increase their travel cost. The", "after": "increases their travel costs. Nonetheless, the", "start_char_pos": 1449, "end_char_pos": 1480}, {"type": "R", "before": "The", "after": "For system efficiency and sustainability,", "start_char_pos": 1555, "end_char_pos": 1558}, {"type": "R", "before": "passenger car equivalent (PCE), which is good for transport efficiency and sustainability. 
But", "after": "PCE, while", "start_char_pos": 1632, "end_char_pos": 1726}, {"type": "R", "before": "Overall, the competition does not necessarily bring out loss-gain results. A win-win outcome is also possible under certain policy interventions", "after": "For policy implications, the paper suggests that PT should be allowed to optimize its supply strategies under specific operation goal constraints, and AV operation should be regulated to relieve its externality on the system, including limiting the number of licenses, operation time, and service areas, which makes AV operate like a complementary mode to PT", "start_char_pos": 1779, "end_char_pos": 1923}], "sents_char_pos": [0, 111, 270, 648, 717, 756, 919, 1044, 1227, 1376, 1476, 1554, 1722, 1778, 1853]} {"doc_id": "2001.03513", "revision_depth": "1", "before_revision": "Women have always been underrepresented in movies and not until recently do women representation in movies improve . To investigate the improvement of women representation and its relationship with a movie's success, we propose a new measure, the female cast ratio, and compare it to the commonly used Bechdel test result. We employ generalized linear regression with L_1 penalty and a Random Forest model to identify the predictors that are influential on women representation, and evaluate the relationship between women representation and a movie's success in three aspects: revenue/budget ratio, rating and popularity. Three important findings in our study have highlighted the difficulties women in the film industry face in both upstream and downstream. First, female filmmakers especially female screenplay writers are instrumental for movies to have better women representation, but the percentage of female filmmakers has been very low. Second, lower budgets are often made to support movies that could tell good stories about women , and this usually cause the films to in turn receive more criticisms . 
Finally, the demand for better women presentation from moviegoers has also not been strong enough to compel the film industry for a change, as movies that have poor women representation can still be very popular and successful in the box office.", "after_revision": "Women have always been underrepresented in movies and not until recently has the representation of women in movies improved . To investigate the improvement of female representation and its relationship with a movie's success, we propose a new measure, the female cast ratio, and compare it to the commonly used Bechdel test result. We employ generalized linear regression with L_1 penalty and a Random Forest model to identify the predictors that influence female representation, and evaluate the relationship between female representation and a movie's success in three aspects: revenue/budget ratio, rating , and popularity. Three important findings in our study have highlighted the difficulties women in the film industry face both upstream and downstream. First, female filmmakers , especially female screenplay writers , are instrumental for movies to have better female representation, but the percentage of female filmmakers has been very low. Second, movies that have the potential to tell insightful stories about women are often provided with lower budgets , and this usually causes the films to in turn receive more criticism . 
Finally, the demand for better female representation from moviegoers has also not been strong enough to compel the film industry to change, as movies that have poor female representation can still be very popular and successful in the box office.", "edit_actions": [{"type": "R", "before": "do women representation in movies improve", "after": "has the representation of women in movies improved", "start_char_pos": 73, "end_char_pos": 114}, {"type": "R", "before": "women", "after": "female", "start_char_pos": 151, "end_char_pos": 156}, {"type": "R", "before": "are influential on women", "after": "influence female", "start_char_pos": 438, "end_char_pos": 462}, {"type": "R", "before": "women", "after": "female", "start_char_pos": 517, "end_char_pos": 522}, {"type": "A", "before": null, "after": ",", "start_char_pos": 607, "end_char_pos": 607}, {"type": "D", "before": "in", "after": null, "start_char_pos": 728, "end_char_pos": 730}, {"type": "A", "before": null, "after": ",", "start_char_pos": 786, "end_char_pos": 786}, {"type": "A", "before": null, "after": ",", "start_char_pos": 824, "end_char_pos": 824}, {"type": "R", "before": "women", "after": "female", "start_char_pos": 868, "end_char_pos": 873}, {"type": "R", "before": "lower budgets are often made to support movies that could tell good", "after": "movies that have the potential to tell insightful", "start_char_pos": 957, "end_char_pos": 1024}, {"type": "A", "before": null, "after": "are often provided with lower budgets", "start_char_pos": 1045, "end_char_pos": 1045}, {"type": "R", "before": "cause", "after": "causes", "start_char_pos": 1065, "end_char_pos": 1070}, {"type": "R", "before": "criticisms", "after": "criticism", "start_char_pos": 1105, "end_char_pos": 1115}, {"type": "R", "before": "women presentation", "after": "female representation", "start_char_pos": 1149, "end_char_pos": 1167}, {"type": "R", "before": "for a", "after": "to", "start_char_pos": 1244, "end_char_pos": 1249}, {"type": "R", "before": 
"women", "after": "female", "start_char_pos": 1283, "end_char_pos": 1288}], "sents_char_pos": [0, 116, 322, 623, 760, 948, 1117]} {"doc_id": "2001.07824", "revision_depth": "1", "before_revision": "Decision models can synthesize evidence from different sources to provide estimates of long-term consequences of a decision with uncertainty. Cohort state-transition models (cSTM) are decision models commonly used in medical decision making because they can simulate hypothetical cohorts' transitions across various health states over time. This tutorial shows how to conceptualize cSTMs in a programming languageenvironment and shows examples of their implementation in R. We illustrate their use in a cost-effectiveness analysis of a treatment using a previously published testbed cSTM. Both time-independent cSTM where transition probabilities are constant over time and time-dependent cSTM where transition probabilities vary over time are represented. For the time-dependent cSTM, we consider transition probabilities dependent on age and state residence . We also illustrate how this setup can facilitate the computation of epidemiological outcomes of interest, such as survival and prevalence. We conclude by demonstrating how to calculate economic outcomes and conducting a cost-effectiveness analysis of a treatment compared to usual care using the testbed model . We provide a link to a public repository with all the R code described in this tutorial that can be used to replicate the example or to be modified to suit different decision modeling needs .", "after_revision": "Decision models can synthesize evidence from different sources to simulate long-term consequences of different strategies in the presence of uncertainty. Cohort state-transition models (cSTM) are decision models commonly used in medical decision making to simulate hypothetical cohorts' transitions across various health states over time. 
This tutorial shows how to implement cSTMs in R, an open-source mathematical and statistical programming language. As an example, we use a previously published cSTM-based cost-effectiveness analysis. With this example, we illustrate both time-independent cSTMs, where transition probabilities are constant over time , and time-dependent cSTMs, where transition probabilities vary by age and are dependent on time spent in a health state (state residence) . We also illustrate how to compute various epidemiological outcomes of interest, such as survival and prevalence. We demonstrate how to calculate economic outcomes and conducting a cost-effectiveness analysis of multiple strategies using the example model, and provide additional resources to conduct probabilistic sensitivity analyses . We provide a link to a public repository with all the R code described in this tutorial that can be used to replicate the example or be adapted for various decision modeling applications .", "edit_actions": [{"type": "R", "before": "provide estimates of", "after": "simulate", "start_char_pos": 66, "end_char_pos": 86}, {"type": "R", "before": "a decision with", "after": "different strategies in the presence of", "start_char_pos": 113, "end_char_pos": 128}, {"type": "R", "before": "because they can", "after": "to", "start_char_pos": 241, "end_char_pos": 257}, {"type": "R", "before": "conceptualize cSTMs in a programming languageenvironment and shows examples of their implementation in R. We illustrate their use in a", "after": "implement cSTMs in R, an open-source mathematical and statistical programming language. As an example, we use a previously published cSTM-based", "start_char_pos": 368, "end_char_pos": 502}, {"type": "R", "before": "analysis of a treatment using a previously published testbed cSTM. Both", "after": "analysis. 
With this example, we illustrate both", "start_char_pos": 522, "end_char_pos": 593}, {"type": "R", "before": "cSTM", "after": "cSTMs,", "start_char_pos": 611, "end_char_pos": 615}, {"type": "A", "before": null, "after": ",", "start_char_pos": 670, "end_char_pos": 670}, {"type": "R", "before": "cSTM", "after": "cSTMs,", "start_char_pos": 690, "end_char_pos": 694}, {"type": "R", "before": "over time are represented. For the time-dependent cSTM, we consider transition probabilities dependent on age and state residence", "after": "by age and are dependent on time spent in a health state (state residence)", "start_char_pos": 731, "end_char_pos": 860}, {"type": "R", "before": "this setup can facilitate the computation of", "after": "to compute various", "start_char_pos": 886, "end_char_pos": 930}, {"type": "R", "before": "conclude by demonstrating", "after": "demonstrate", "start_char_pos": 1005, "end_char_pos": 1030}, {"type": "R", "before": "a treatment compared to usual care using the testbed model", "after": "multiple strategies using the example model, and provide additional resources to conduct probabilistic sensitivity analyses", "start_char_pos": 1114, "end_char_pos": 1172}, {"type": "R", "before": "to be modified to suit different decision modeling needs", "after": "be adapted for various decision modeling applications", "start_char_pos": 1308, "end_char_pos": 1364}], "sents_char_pos": [0, 141, 340, 473, 588, 757, 862, 1001, 1174]} {"doc_id": "2001.08950", "revision_depth": "1", "before_revision": "BERT has emerged as a popular model for natural language understanding. Given its compute intensive nature, even for inference, many recent studies have considered optimization of two important performance characteristics: model size and inference time. We consider classification tasks and propose a novel method, called PoWER-BERT, for improving the inference time for the BERT model without significant loss in the accuracy. 
The method works byeliminating word-vectors (intermediate vector outputs) from the encoder pipeline. We design a strategy for measuring the significanceof the word-vectors based on the self-attention mechanism of the encoders which helps us identify the word-vectors to be eliminated. Experimental evaluation on the standard GLUE benchmark shows that PoWER-BERT achieves up to 4.5x reduction in inference time over BERT with <1\\% loss in accuracy. We show that compared to the prior inference time reduction methods, PoWER-BERT offers better trade-off between accuracy and inference time . Lastly, we demonstrate that our scheme can also be used in conjunction with ALBERT (a highly compressed version of BERT) and can attain up to 6.8x factor reduction in inference time with <1\\% loss in accuracy .", "after_revision": "We develop a novel method, called PoWER-BERT, for improving the inference time of the popular BERT model, while maintaining the accuracy. It works by: a) exploiting redundancy pertaining to word-vectors (intermediate encoder outputs) and eliminating the redundant vectors. b) determining which word-vectors to eliminate by developing a strategy for measuring their significance, based on the self-attention mechanism ; c) learning how many word-vectors to eliminate by augmenting the BERT model and the loss function. Experiments on the standard GLUE benchmark shows that PoWER-BERT achieves up to 4.5x reduction in inference time over BERT with <1\\% loss in accuracy. We show that PoWER-BERT offers significantly better trade-off between accuracy and inference time compared to prior methods. We demonstrate that our method attains up to 6.8x reduction in inference time with <1\\% loss in accuracy when applied over ALBERT, a highly compressed version of BERT .", "edit_actions": [{"type": "R", "before": "BERT has emerged as a popular model for natural language understanding. 
Given its compute intensive nature, even for inference, many recent studies have considered optimization of two important performance characteristics: model size and inference time. We consider classification tasks and propose", "after": "We develop", "start_char_pos": 0, "end_char_pos": 298}, {"type": "R", "before": "for the BERT model without significant loss in", "after": "of the popular BERT model, while maintaining", "start_char_pos": 367, "end_char_pos": 413}, {"type": "R", "before": "The method works byeliminating", "after": "It works by: a) exploiting redundancy pertaining to", "start_char_pos": 428, "end_char_pos": 458}, {"type": "R", "before": "vector outputs) from the encoder pipeline. We design", "after": "encoder outputs) and eliminating the redundant vectors. b) determining which word-vectors to eliminate by developing", "start_char_pos": 486, "end_char_pos": 538}, {"type": "R", "before": "the significanceof the word-vectors", "after": "their significance,", "start_char_pos": 564, "end_char_pos": 599}, {"type": "R", "before": "of the encoders which helps us identify the", "after": "; c) learning how many", "start_char_pos": 638, "end_char_pos": 681}, {"type": "R", "before": "be eliminated. Experimental evaluation", "after": "eliminate by augmenting the BERT model and the loss function. Experiments", "start_char_pos": 698, "end_char_pos": 736}, {"type": "D", "before": "compared to the prior inference time reduction methods,", "after": null, "start_char_pos": 889, "end_char_pos": 944}, {"type": "A", "before": null, "after": "significantly", "start_char_pos": 963, "end_char_pos": 963}, {"type": "R", "before": ". Lastly, we", "after": "compared to prior methods. 
We", "start_char_pos": 1017, "end_char_pos": 1029}, {"type": "R", "before": "scheme can also be used in conjunction with ALBERT (a highly compressed version of BERT) and can attain", "after": "method attains", "start_char_pos": 1051, "end_char_pos": 1154}, {"type": "D", "before": "factor", "after": null, "start_char_pos": 1166, "end_char_pos": 1172}, {"type": "A", "before": null, "after": "when applied over ALBERT, a highly compressed version of BERT", "start_char_pos": 1228, "end_char_pos": 1228}], "sents_char_pos": [0, 71, 253, 427, 528, 712, 875]} {"doc_id": "2001.09850", "revision_depth": "1", "before_revision": "We propose a novel time discretization for the log-normal SABR model dS_t = \\sigma_t S_t dW_t, d\\sigma_t = \\omega \\sigma_t dZ_t, with \\mbox{corr(W_t,Z_t)=\\varrho, } which is a variant of the Euler-Maruyama scheme , and study its asymptotic properties in the limit of a large number of time steps n%DIFDELCMD < \\to %%% \\infty at fixed \\beta =%DIFDELCMD < \\frac12%%% \\omega^2 n^2\\tau, \\rho = \\sigma_0\\sqrt{\\tau . We derive an almost sure limit and a large deviations result for the log-asset price in the n%DIFDELCMD < \\to %%% \\infty limit . The rate function of the large deviations result does not depend on the discretization time step \\tau. The implied volatility surface \\sigma_{\\rm BS for arbitrary maturity and strike in the limit \\omega^2 T%DIFDELCMD < \\to %%% 0 , \\sigma_0^2 T%DIFDELCMD < \\to %%% \\infty at fixed (\\omega^2 T)(\\sigma_0^2 T) is represented as an extremal problem . Using this representation we obtain analytical expansions of \\sigma_{\\rm BS for small maturity and extreme strikes .", "after_revision": "We propose a novel time discretization for the log-normal SABR model (W_t,Z_t)=\\varrho, } which is a popular stochastic volatility model that is widely used in financial practice. Our time discretization is a variant of the Euler-Maruyama scheme . 
We study its asymptotic properties in the limit of a large number of time steps %DIFDELCMD < \\to %%% %DIFDELCMD < \\frac12%%% under a certain asymptotic regime which includes the case of finite maturity, small vol-of-vol and large initial volatility with fixed product of vol-of-vol and initial volatility . We derive an almost sure limit and a large deviations result for the log-asset price in the %DIFDELCMD < \\to %%% limit of large number of time steps. We derive an exact representation of the implied volatility surface for arbitrary maturity and strike in %DIFDELCMD < \\to %%% %DIFDELCMD < \\to %%% this regime . Using this representation we obtain analytical expansions of the implied volatility for small maturity and extreme strikes , which reproduce at leading order known asymptotic results for the continuous time model .", "edit_actions": [{"type": "D", "before": "dS_t = \\sigma_t S_t dW_t, d\\sigma_t = \\omega \\sigma_t dZ_t, with \\mbox{corr", "after": null, "start_char_pos": 69, "end_char_pos": 144}, {"type": "A", "before": null, "after": "popular stochastic volatility model that is widely used in financial practice. Our time discretization is a", "start_char_pos": 176, "end_char_pos": 176}, {"type": "R", "before": ", and", "after": ". We", "start_char_pos": 214, "end_char_pos": 219}, {"type": "D", "before": "n", "after": null, "start_char_pos": 297, "end_char_pos": 298}, {"type": "D", "before": "\\infty at fixed \\beta =", "after": null, "start_char_pos": 319, "end_char_pos": 342}, {"type": "R", "before": "\\omega^2 n^2\\tau, \\rho = \\sigma_0\\sqrt{\\tau", "after": "under a certain asymptotic regime which includes the case of finite maturity, small vol-of-vol and large initial volatility with fixed product of vol-of-vol and initial volatility", "start_char_pos": 366, "end_char_pos": 409}, {"type": "D", "before": "n", "after": null, "start_char_pos": 504, "end_char_pos": 505}, {"type": "R", "before": "\\infty limit . 
The rate function of the large deviations result does not depend on the discretization time step \\tau. The", "after": "limit of large number of time steps. We derive an exact representation of the", "start_char_pos": 526, "end_char_pos": 647}, {"type": "D", "before": "\\sigma_{\\rm BS", "after": null, "start_char_pos": 675, "end_char_pos": 689}, {"type": "D", "before": "the limit \\omega^2 T", "after": null, "start_char_pos": 727, "end_char_pos": 747}, {"type": "D", "before": "0 , \\sigma_0^2 T", "after": null, "start_char_pos": 768, "end_char_pos": 784}, {"type": "R", "before": "\\infty at fixed (\\omega^2 T)(\\sigma_0^2 T) is represented as an extremal problem", "after": "this regime", "start_char_pos": 805, "end_char_pos": 885}, {"type": "R", "before": "\\sigma_{\\rm BS", "after": "the implied volatility", "start_char_pos": 949, "end_char_pos": 963}, {"type": "A", "before": null, "after": ", which reproduce at leading order known asymptotic results for the continuous time model", "start_char_pos": 1003, "end_char_pos": 1003}], "sents_char_pos": [0, 411, 540, 643, 887]} {"doc_id": "2001.10696", "revision_depth": "1", "before_revision": "Spiking Neural Network (SNN), as a brain-inspired approach, is attracting attentions due to its potential to produce ultra-high-energy-efficient hardware. Competitive learning based on Spike-Timing-Dependent Plasticity (STDP) is a popular method to train unsupervised SNN. However, previous unsupervised SNNs trained through this method are limited to shallow networks with only one learnable layer and can't achieve satisfactory results when compared with multi-layer SNNs. In this paper, we ease this limitation by: 1)We propose Spiking Inception (Sp-Inception) module, inspired by the Inception module in Artificial Neural Network (ANN) literature. 
This module is trained through STDP- based competitive learning and outperforms baseline modules on learning capability, learning efficiency, and robustness ; 2)We propose Pooling-Reshape-Activate (PRA) layer to make Sp-Inception module stackable ; 3)We stack multiple Sp-Inception modules to construct multi-layer SNNs. Our method greatly exceeds baseline methods on image classification tasks and reaches state-of-the-art results on MNIST dataset among existing unsupervised SNNs.", "after_revision": "Spiking Neural Network (SNN), as a brain-inspired approach, is attracting attention due to its potential to produce ultra-high-energy-efficient hardware. Competitive learning based on Spike-Timing-Dependent Plasticity (STDP) is a popular method to train an unsupervised SNN. However, previous unsupervised SNNs trained through this method are limited to a shallow network with only one learnable layer and cannot achieve satisfactory results when compared with multi-layer SNNs. In this paper, we eased this limitation by: 1)We proposed a Spiking Inception (Sp-Inception) module, inspired by the Inception module in the Artificial Neural Network (ANN) literature. This module is trained through STDP-based competitive learning and outperforms the baseline modules on learning capability, learning efficiency, and robustness . 2)We proposed a Pooling-Reshape-Activate (PRA) layer to make the Sp-Inception module stackable . 3)We stacked multiple Sp-Inception modules to construct multi-layer SNNs. 
Our algorithm outperforms the baseline algorithms on the hand-written digit classification task, and reaches state-of-the-art results on the MNIST dataset among the existing unsupervised SNNs.", "edit_actions": [{"type": "R", "before": "attentions", "after": "attention", "start_char_pos": 74, "end_char_pos": 84}, {"type": "A", "before": null, "after": "an", "start_char_pos": 255, "end_char_pos": 255}, {"type": "R", "before": "shallow networks", "after": "a shallow network", "start_char_pos": 353, "end_char_pos": 369}, {"type": "R", "before": "can't", "after": "cannot", "start_char_pos": 404, "end_char_pos": 409}, {"type": "R", "before": "ease", "after": "eased", "start_char_pos": 494, "end_char_pos": 498}, {"type": "R", "before": "propose", "after": "proposed a", "start_char_pos": 524, "end_char_pos": 531}, {"type": "A", "before": null, "after": "the", "start_char_pos": 609, "end_char_pos": 609}, {"type": "R", "before": "STDP- based", "after": "STDP-based", "start_char_pos": 685, "end_char_pos": 696}, {"type": "A", "before": null, "after": "the", "start_char_pos": 734, "end_char_pos": 734}, {"type": "R", "before": ";", "after": ".", "start_char_pos": 812, "end_char_pos": 813}, {"type": "R", "before": "propose", "after": "proposed a", "start_char_pos": 819, "end_char_pos": 826}, {"type": "A", "before": null, "after": "the", "start_char_pos": 872, "end_char_pos": 872}, {"type": "R", "before": ";", "after": ".", "start_char_pos": 903, "end_char_pos": 904}, {"type": "R", "before": "stack", "after": "stacked", "start_char_pos": 910, "end_char_pos": 915}, {"type": "R", "before": "method greatly exceeds baseline methods on image classification tasks", "after": "algorithm outperforms the baseline algorithms on the hand-written digit classification task,", "start_char_pos": 981, "end_char_pos": 1050}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1091, "end_char_pos": 1091}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1112, 
"end_char_pos": 1112}], "sents_char_pos": [0, 154, 653, 813, 904]} {"doc_id": "2001.11432", "revision_depth": "1", "before_revision": "The activity of neuronal networks can exhibit periods of bursting, the properties of which remain unclear. To study the bursting dynamics of a network , we develop a new stochastic model based on synaptic properties that also accounts for afterhyperpolarization , which shapes the end of a burst. A stochastic perturbation of this system leads to a succession of bursts and interbursts and we characterize their durations using a three dimensional phase-space analysis and numerical simulations. The phase-space contains three critical points (one attractor and two saddles) separated by a two-dimensional stable manifold \\Sigma. Bursting is defined by long deterministic excursions outside the basin of attraction, while the interburst duration is defined by the escape induced by random fluctuations. To characterize the distribution of the burst durations, we determine the distribution of exit points located on the two-dimensional separatrix \\Sigma . In addition, we compute analytically the mean burst and AHP durations using a linearization approximation of the dynamical system . Finally, we explore how various parameters such as the network connectivity and the afterhyperpolarization characteristics influence bursting and AHP dynamics . To conclude , this new model allows us to better characterize the role of various physiological parameters on bursting dynamics .", "after_revision": "In the absence of inhibition, excitatory neuronal networks can alternate between bursts and interburst intervals (IBI), with heterogeneous length distributions. As this dynamic remains unclear, especially the durations of each epoch , we develop here a bursting model based on synaptic depression and facilitation that also accounts for afterhyperpolarization (AHP), which is a key component of IBI. 
The framework is a novel stochastic three dimensional dynamical system perturbed by noise: numerical simulations can reproduce a succession of bursts and interbursts . Each phase corresponds to an exploration of a fraction of the phase-space , which contains three critical points (one attractor and two saddles) separated by a two-dimensional stable manifold \\Sigma. We show here that bursting is defined by long deterministic excursions away from the attractor, while IBI corresponds to escape induced by random fluctuations. We show that the variability in the burst durations, depends on the distribution of exit points located on \\Sigma that we compute using WKB and the method of characteristics . Finally, to better characterize the role of several parameters such as the network connectivity or the AHP time scale, we compute analytically the mean burst and AHP durations in a linear approximation . To conclude the distribution of bursting and IBI could result from synaptic dynamics modulated by AHP .", "edit_actions": [{"type": "R", "before": "The activity of", "after": "In the absence of inhibition, excitatory", "start_char_pos": 0, "end_char_pos": 15}, {"type": "R", "before": "exhibit periods of bursting, the properties of which remain unclear. To study the bursting dynamics of a network", "after": "alternate between bursts and interburst intervals (IBI), with heterogeneous length distributions. As this dynamic remains unclear, especially the durations of each epoch", "start_char_pos": 38, "end_char_pos": 150}, {"type": "R", "before": "a new stochastic", "after": "here a bursting", "start_char_pos": 164, "end_char_pos": 180}, {"type": "R", "before": "properties", "after": "depression and facilitation", "start_char_pos": 205, "end_char_pos": 215}, {"type": "R", "before": ", which shapes the end of a burst. A stochastic perturbation of this system leads to", "after": "(AHP), which is a key component of IBI. 
The framework is a novel stochastic three dimensional dynamical system perturbed by noise: numerical simulations can reproduce", "start_char_pos": 262, "end_char_pos": 346}, {"type": "R", "before": "and we characterize their durations using a three dimensional", "after": ". Each phase corresponds to an exploration of a fraction of the", "start_char_pos": 386, "end_char_pos": 447}, {"type": "R", "before": "analysis and numerical simulations. The phase-space", "after": ", which", "start_char_pos": 460, "end_char_pos": 511}, {"type": "R", "before": "Bursting", "after": "We show here that bursting", "start_char_pos": 630, "end_char_pos": 638}, {"type": "R", "before": "outside the basin of attraction, while the interburst duration is defined by the", "after": "away from the attractor, while IBI corresponds to", "start_char_pos": 683, "end_char_pos": 763}, {"type": "R", "before": "To characterize the distribution of", "after": "We show that the variability in", "start_char_pos": 803, "end_char_pos": 838}, {"type": "R", "before": "we determine", "after": "depends on", "start_char_pos": 860, "end_char_pos": 872}, {"type": "R", "before": "the two-dimensional separatrix \\Sigma . 
In addition, we compute analytically the mean burst and AHP durations using a linearization approximation of the dynamical system", "after": "\\Sigma that we compute using WKB and the method of characteristics", "start_char_pos": 916, "end_char_pos": 1085}, {"type": "R", "before": "we explore how various", "after": "to better characterize the role of several", "start_char_pos": 1097, "end_char_pos": 1119}, {"type": "R", "before": "and the afterhyperpolarization characteristics influence bursting and AHP dynamics", "after": "or the AHP time scale, we compute analytically the mean burst and AHP durations in a linear approximation", "start_char_pos": 1164, "end_char_pos": 1246}, {"type": "R", "before": ", this new model allows us to better characterize the role of various physiological parameters on bursting dynamics", "after": "the distribution of bursting and IBI could result from synaptic dynamics modulated by AHP", "start_char_pos": 1261, "end_char_pos": 1376}], "sents_char_pos": [0, 106, 296, 495, 629, 802, 955, 1087, 1248]} {"doc_id": "2002.04242", "revision_depth": "1", "before_revision": "Due to world dynamics and hardware uncertainty, robots inevitably fail in task executions, leading to undesired or even dangerous executions. To avoid failures for improved robot performance, it is critical to identify and correct robot abnormal executions in an early stage. However, limited by reasoning capability and knowledge level , it is challenging for a robot to self diagnose and correct their abnormal behaviors. To solve this problem, a novel method is proposed, human-to-robot attention transfer ( H2R-AT) to seek help from a human . H2R-AT is developed based on \\textbf{ \\textit{\\textbf{ a novel stacked neural networks model, transferring human attention embedded in verbal reminders to robot attention embedded in robot visual perceiving . 
With the attention transfer from a human , a robot understands what and where \\textit{ \\textit{ human concerns are to identify and correct its abnormal executions. To validate the effectiveness of H2R-AT, two representative task scenarios , \" serve water for a human in a kitchen\" and \" pick up a defective gear in a factory\" with abnormal robot executions, were designed in an open-access simulation platform V-REP; 252 volunteers were recruited to provide about 12000 verbal reminders to learn and test the attention transfer model H2R-AT. With an \\textit{\\textbf{ accuracy of 73.68\\\\%DIF < in transferring attention and accuracy of 66.86\\\\% in avoiding robot execution failures, the effectiveness of H2R-AT was validated.\\end{abstract} %DIF > in transferring attention, and the high accuracy of 66.86\\\\% in avoiding grasping failures.", "after_revision": "Due to real-world dynamics and hardware uncertainty, robots inevitably fail in task executions, resulting in undesired or even dangerous executions. In order to avoid failures and improve robot performance, it is critical to identify and correct abnormal robot executions at an early stage. However, due to limited reasoning capability and knowledge storage , it is challenging for robots to self-diagnose and -correct their own abnormality in both planning and executing. To improve robot self diagnosis capability, in this research a novel human-to-robot attention transfer ( \\textbf{H2R-AT ) method was developed to identify robot manipulation errors by leveraging human instructions.\\textit{\\textbf{H2R-AT was developed by fusing attention mapping mechanism into a novel stacked neural networks model, transferring human verbal attention into robot visual attention . With the attention transfer , a robot understands \\textit{what and\\textit{where human concerns are to identify and correct abnormal manipulations. 
Two representative task scenarios : `` serve water for a human in a kitchen\" and `` pick up a defective gear in a factory\" were designed in a simulation framework CRAIhri with abnormal robot manipulations; and 252 volunteers were recruited to provide about 12000 verbal reminders to learn and test \\textit{\\textbf{H2R-AT . The method effectiveness was validated by the high accuracy of 73.68\\\\%DIF < in transferring attention and accuracy of 66.86\\\\% in avoiding robot execution failures, the effectiveness of H2R-AT was validated.\\end{abstract} %DIF > in transferring attention, and the high accuracy of 66.86\\\\% in avoiding grasping failures.", "edit_actions": [{"type": "R", "before": "world", "after": "real-world", "start_char_pos": 7, "end_char_pos": 12}, {"type": "R", "before": "leading to", "after": "resulting in", "start_char_pos": 91, "end_char_pos": 101}, {"type": "R", "before": "To avoid failures for improved", "after": "In order to avoid failures and improve", "start_char_pos": 142, "end_char_pos": 172}, {"type": "R", "before": "robot abnormal executions in", "after": "abnormal robot executions at", "start_char_pos": 231, "end_char_pos": 259}, {"type": "R", "before": "limited by", "after": "due to limited", "start_char_pos": 285, "end_char_pos": 295}, {"type": "R", "before": "level", "after": "storage", "start_char_pos": 331, "end_char_pos": 336}, {"type": "R", "before": "a robot to self diagnose and correct their abnormal behaviors. To solve this problem, a novel method is proposed,", "after": "robots to self-diagnose and -correct their own abnormality in both planning and executing. To improve robot self diagnosis capability, in this research a novel", "start_char_pos": 361, "end_char_pos": 474}, {"type": "D", "before": "H2R-AT) to seek help from a human . 
H2R-AT is developed based on", "after": null, "start_char_pos": 511, "end_char_pos": 575}, {"type": "A", "before": null, "after": "H2R-AT", "start_char_pos": 584, "end_char_pos": 584}, {"type": "A", "before": null, "after": ") method was developed to identify robot manipulation errors by leveraging human instructions.", "start_char_pos": 585, "end_char_pos": 585}, {"type": "A", "before": null, "after": "H2R-AT", "start_char_pos": 601, "end_char_pos": 601}, {"type": "A", "before": null, "after": "was developed by fusing attention mapping mechanism into", "start_char_pos": 602, "end_char_pos": 602}, {"type": "R", "before": "attention embedded in verbal reminders to robot attention embedded in robot visual perceiving", "after": "verbal attention into robot visual attention", "start_char_pos": 661, "end_char_pos": 754}, {"type": "D", "before": "from a human", "after": null, "start_char_pos": 785, "end_char_pos": 797}, {"type": "D", "before": "what and where", "after": null, "start_char_pos": 820, "end_char_pos": 834}, {"type": "A", "before": null, "after": "what", "start_char_pos": 843, "end_char_pos": 843}, {"type": "A", "before": null, "after": "and", "start_char_pos": 844, "end_char_pos": 844}, {"type": "A", "before": null, "after": "where", "start_char_pos": 852, "end_char_pos": 852}, {"type": "R", "before": "its abnormal executions. To validate the effectiveness of H2R-AT, two", "after": "abnormal manipulations. 
Two", "start_char_pos": 896, "end_char_pos": 965}, {"type": "R", "before": ", \"", "after": ": ``", "start_char_pos": 996, "end_char_pos": 999}, {"type": "R", "before": "\"", "after": "``", "start_char_pos": 1042, "end_char_pos": 1043}, {"type": "D", "before": "with abnormal robot executions,", "after": null, "start_char_pos": 1083, "end_char_pos": 1114}, {"type": "R", "before": "an open-access simulation platform V-REP;", "after": "a simulation framework CRAIhri with abnormal robot manipulations; and", "start_char_pos": 1132, "end_char_pos": 1173}, {"type": "D", "before": "the attention transfer model H2R-AT. With an", "after": null, "start_char_pos": 1262, "end_char_pos": 1306}, {"type": "A", "before": null, "after": "H2R-AT", "start_char_pos": 1323, "end_char_pos": 1323}, {"type": "A", "before": null, "after": ". The method effectiveness was validated by the high", "start_char_pos": 1324, "end_char_pos": 1324}], "sents_char_pos": [0, 141, 275, 423, 756, 920, 1173, 1298, 1482]} {"doc_id": "2002.04309", "revision_depth": "2", "before_revision": "Kinetic Ising models are powerful tools for studying the non-equilibrium dynamics of complex , discrete systemsand for analyzing their experimental recordings. However, the behaviour of the model is in general not tractable for large networks ; therefore, mean field theories are frequently used to approximate its statistical properties. Many variants of the classical naive and TAP (i.e., second-order) mean field approximations have been proposed , each of which makes unique assumptions about the temporal evolution of the system's correlation structure . This disparity of methods makes it difficult to systematically advance mean field approaches beyond previous contributions. Here, we propose a unified framework for mean field theories of the dynamics of asymmetric kinetic Ising systems based on information geometry . 
The framework is built on Plefka expansions of the model around a simplified model obtained by an orthogonal projection to a sub-manifold of tractable probability distributions. This view not only unifies previous methods but also allows us to propose novel methods that, in contrast with traditional mean-field approaches, preserve correlations of the system, both in stationary and transient states. By comparing analytical approximations and exact numerical simulations, we show that the proposed methods provide more accurate estimates of covariance evolution in the system than classical equations, and consequently outperform previous mean field theories for solving the inverse Ising problem. In sum, our framework unifies and extends current mean-field approximations of kinetic Ising model, constituting a powerful tool for studying non-equilibrium dynamics of complex systems .", "after_revision": "Kinetic Ising models are powerful tools for studying the non-equilibrium dynamics of complex systems. As their behavior is not tractable for large networks , many mean-field methods have been proposed for their analysis, each based on unique assumptions about the system's temporal evolution . This disparity of approaches makes it challenging to systematically advance mean-field methods beyond previous contributions. Here, we propose a unifying framework for mean-field theories of asymmetric kinetic Ising systems from an information geometry perspective . The framework is built on Plefka expansions of a system around a simplified model obtained by an orthogonal projection to a sub-manifold of tractable probability distributions. This view not only unifies previous methods but also allows us to develop novel methods that, in contrast with traditional approaches, preserve the system's correlations. 
We show that these new methods can outperform previous ones in predicting and assessing network properties near maximally fluctuating regimes .", "edit_actions": [{"type": "R", "before": ", discrete systemsand for analyzing their experimental recordings. However, the behaviour of the model is in general", "after": "systems. As their behavior is", "start_char_pos": 93, "end_char_pos": 209}, {"type": "R", "before": "; therefore, mean field theories are frequently used to approximate its statistical properties. Many variants of the classical naive and TAP (i.e., second-order) mean field approximations", "after": ", many mean-field methods", "start_char_pos": 243, "end_char_pos": 430}, {"type": "R", "before": ", each of which makes", "after": "for their analysis, each based on", "start_char_pos": 450, "end_char_pos": 471}, {"type": "D", "before": "temporal evolution of the", "after": null, "start_char_pos": 501, "end_char_pos": 526}, {"type": "R", "before": "correlation structure", "after": "temporal evolution", "start_char_pos": 536, "end_char_pos": 557}, {"type": "R", "before": "methods makes it difficult", "after": "approaches makes it challenging", "start_char_pos": 578, "end_char_pos": 604}, {"type": "R", "before": "mean field approaches", "after": "mean-field methods", "start_char_pos": 631, "end_char_pos": 652}, {"type": "R", "before": "unified framework for mean field theories of the dynamics of", "after": "unifying framework for mean-field theories of", "start_char_pos": 703, "end_char_pos": 763}, {"type": "R", "before": "based on information geometry", "after": "from an information geometry perspective", "start_char_pos": 797, "end_char_pos": 826}, {"type": "R", "before": "the model", "after": "a system", "start_char_pos": 876, "end_char_pos": 885}, {"type": "R", "before": "propose", "after": "develop", "start_char_pos": 1073, "end_char_pos": 1080}, {"type": "D", "before": "mean-field", "after": null, "start_char_pos": 1130, "end_char_pos": 1140}, {"type": 
"R", "before": "correlations of the system, both in stationary and transient states. By comparing analytical approximations and exact numerical simulations, we show that the proposed methods provide more accurate estimates of covariance evolution in the system than classical equations, and consequently outperform previous mean field theories for solving the inverse Ising problem. In sum, our framework unifies and extends current mean-field approximations of kinetic Ising model, constituting a powerful tool for studying non-equilibrium dynamics of complex systems", "after": "the system's correlations. We show that these new methods can outperform previous ones in predicting and assessing network properties near maximally fluctuating regimes", "start_char_pos": 1162, "end_char_pos": 1714}], "sents_char_pos": [0, 159, 244, 338, 559, 683, 828, 1006, 1230, 1528]} {"doc_id": "2002.04896", "revision_depth": "1", "before_revision": "The FFT of three dimensional (3D) input data is an important computational kernel of numerical simulations and is widely used in High Performance Computing (HPC) codes running on large number of processors. Although the efficient parallelization of 3D FFT has been largely investigated over the last few decades, performance and scalability of parallel 3D FFT methods on new generation hardware architecture for HPC is a major challenge. Looking at upcoming exascale cluster architectures, the conventional parallel 3D FFT calculations on HPC needs improvement for better performance . In this paper, we present CDACs three dimensional Fast Fourier Transform (CROFT) library which implements three dimensional parallel FFT using pencil decomposition. To exploit the multithreading capabilities of hardware without affecting performance, CROFT is designed to use hybrid programming model of OpenMP and MPI. CROFT implementation has a feature of overlapping compute and memory I /O with MPI communication . 
Depending on the number of processes used, CROFT shows performance improvement of about 51 to 42 percent as compared to FFTW3 library.", "after_revision": "The FFT of three-dimensional (3D) input data is an important computational kernel of numerical simulations and is widely used in High Performance Computing (HPC) codes running on a large number of processors. Performance of many scientific applications such as Molecular Dynamic simulations depends on the underlying 3D parallel FFT library being used . In this paper, we present C-DACs three-dimensional Fast Fourier Transform (CROFT) library which implements three-dimensional parallel FFT using pencil decomposition. To exploit the hyperthreading capabilities of processor cores without affecting performance, CROFT is designed to use multithreading along with MPI. CROFT implementation has an innovative feature of overlapping compute and memory-I /O with MPI communication using multithreading. As opposed to other 3D FFT implementations, CROFT uses only two threads where one thread is dedicated for communication so that it can be effectively overlapped with computations. Thus, depending on the number of processes used, CROFT achieves performance improvement of about 51 \\% to 42 \\% as compared to FFTW3 library.", "edit_actions": [{"type": "R", "before": "three dimensional", "after": "three-dimensional", "start_char_pos": 11, "end_char_pos": 28}, {"type": "A", "before": null, "after": "a", "start_char_pos": 179, "end_char_pos": 179}, {"type": "R", "before": "Although the efficient parallelization of", "after": "Performance of many scientific applications such as Molecular Dynamic simulations depends on the underlying", "start_char_pos": 208, "end_char_pos": 249}, {"type": "R", "before": "FFT has been largely investigated over the last few decades, performance and scalability of parallel 3D FFT methods on new generation hardware architecture for HPC is a major challenge. 
Looking at upcoming exascale cluster architectures, the conventional parallel 3D FFT calculations on HPC needs improvement for better performance", "after": "parallel FFT library being used", "start_char_pos": 253, "end_char_pos": 584}, {"type": "R", "before": "CDACs three dimensional", "after": "C-DACs three-dimensional", "start_char_pos": 613, "end_char_pos": 636}, {"type": "R", "before": "three dimensional", "after": "three-dimensional", "start_char_pos": 693, "end_char_pos": 710}, {"type": "R", "before": "multithreading capabilities of hardware", "after": "hyperthreading capabilities of processor cores", "start_char_pos": 767, "end_char_pos": 806}, {"type": "R", "before": "hybrid programming model of OpenMP and", "after": "multithreading along with", "start_char_pos": 863, "end_char_pos": 901}, {"type": "R", "before": "a", "after": "an innovative", "start_char_pos": 932, "end_char_pos": 933}, {"type": "R", "before": "memory I", "after": "memory-I", "start_char_pos": 969, "end_char_pos": 977}, {"type": "R", "before": ". Depending", "after": "using multithreading. As opposed to other 3D FFT implementations, CROFT uses only two threads where one thread is dedicated for communication so that it can be effectively overlapped with computations. Thus, depending", "start_char_pos": 1004, "end_char_pos": 1015}, {"type": "R", "before": "shows", "after": "achieves", "start_char_pos": 1055, "end_char_pos": 1060}, {"type": "A", "before": null, "after": "\\%", "start_char_pos": 1097, "end_char_pos": 1097}, {"type": "R", "before": "percent", "after": "\\%", "start_char_pos": 1104, "end_char_pos": 1111}], "sents_char_pos": [0, 207, 438, 586, 751, 1005]} {"doc_id": "2002.05619", "revision_depth": "2", "before_revision": "Recurrent spiking neural networks (RSNN) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of time and energy consumption . Moreover, learning can happen rapidly after very few examples. 
This is due to the optimality of coding and learning schemes, which have yet to be clearly understood. This naturally challenges the formulation of biologically inspired RSNNs in order to improve our understanding of biological intelligence and the efficiency of artificial ones . Several spiking network models have been proposed but it remains a challenge to design RSNNs that use biologically plausible mechanisms capable of solving complex temporal tasks. We use a general probabilistic framework that relies on the principle of maximizing the likelihood for the network to reproduce some temporal dynamics. This principle permits to analytically work out an explicit and biologically plausible plasticity rule. Here we propose a novel target-based learning scheme in which such a rule can be used to efficiently train a RSNN to solve several temporal taskssuch as learning multidimensional trajectory and an implementation of the temporal XOR. We finally show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model is general and it can be applied to a wide variety of network architectures and types of biological neurons. The derived plasticity learning rule is specific to each neuron model and can produce a theoretical prediction to be experimentally verified .", "after_revision": "Recurrent spiking neural networks (RSNN) in the human brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption and requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs to improve our understanding of brain computation and the efficiency of artificial intelligence . 
Several spiking models and learning rules have been proposed , but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle, the maximization of the likelihood for the network to solve a specific task. We propose a novel target-based learning scheme in which the learning rule derived from likelihood maximization is used to mimic a specific spiking pattern that encodes the solution to complex temporal tasks. This method makes the learning extremely rapid and precise, outperforming state of the art algorithms for RSNNs. We demonstrate the capacity of our model to tackle several problems like learning multidimensional trajectories and solving the classical temporal XOR benchmark. Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity learning rule is specific to each neuron model and can produce a theoretical prediction for experimental validation .", "edit_actions": [{"type": "A", "before": null, "after": "human", "start_char_pos": 48, "end_char_pos": 48}, {"type": "R", "before": "time and energy consumption . Moreover, learning can happen rapidly after", "after": "energy consumption and requires", "start_char_pos": 155, "end_char_pos": 228}, {"type": "R", "before": "is due to the optimality of coding and learning schemes, which have yet to be clearly understood. 
This naturally challenges the formulation of biologically inspired RSNNs in order", "after": "motivates the search for biologically inspired learning rules for RSNNs", "start_char_pos": 253, "end_char_pos": 432}, {"type": "R", "before": "biological intelligence", "after": "brain computation", "start_char_pos": 465, "end_char_pos": 488}, {"type": "R", "before": "ones", "after": "intelligence", "start_char_pos": 522, "end_char_pos": 526}, {"type": "R", "before": "network models", "after": "models and learning rules", "start_char_pos": 545, "end_char_pos": 559}, {"type": "A", "before": null, "after": ",", "start_char_pos": 579, "end_char_pos": 579}, {"type": "R", "before": "that use", "after": "whose learning relies on", "start_char_pos": 623, "end_char_pos": 631}, {"type": "A", "before": null, "after": "and are", "start_char_pos": 666, "end_char_pos": 666}, {"type": "R", "before": "We use a general probabilistic framework that relies on the principle of maximizing", "after": "In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle, the maximization of", "start_char_pos": 710, "end_char_pos": 793}, {"type": "R", "before": "reproduce some temporal dynamics. This principle permits to analytically work out an explicit and biologically plausible plasticity rule. Here we", "after": "solve a specific task. We", "start_char_pos": 828, "end_char_pos": 973}, {"type": "R", "before": "such a rule can be used to efficiently train a RSNN to solve several temporal taskssuch as learning multidimensional trajectory and an implementation of the temporal XOR. We finally", "after": "the learning rule derived from likelihood maximization is used to mimic a specific spiking pattern that encodes the solution to complex temporal tasks. This method makes the learning extremely rapid and precise, outperforming state of the art algorithms for RSNNs. 
We demonstrate the capacity of our model to tackle several problems like learning multidimensional trajectories and solving the classical temporal XOR benchmark. Finally, we", "start_char_pos": 1028, "end_char_pos": 1209}, {"type": "D", "before": "is general and it", "after": null, "start_char_pos": 1410, "end_char_pos": 1427}, {"type": "R", "before": "a wide variety of network architectures and", "after": "different", "start_char_pos": 1446, "end_char_pos": 1489}, {"type": "A", "before": null, "after": "analytically", "start_char_pos": 1523, "end_char_pos": 1523}, {"type": "R", "before": "to be experimentally verified", "after": "for experimental validation", "start_char_pos": 1631, "end_char_pos": 1660}], "sents_char_pos": [0, 184, 247, 350, 709, 861, 965, 1198, 1399, 1518]} {"doc_id": "2002.07152", "revision_depth": "1", "before_revision": "An \\alpha-additive spanner of an undirected graph G =(V, E) is a subgraph H such that the distance between any two vertices in G is stretched by no more than an additive factor of \\alpha. It is previously known that unweighted graphshave 2-, 4-, and 6-additive spanners containing (n^{3/2), }%DIFDELCMD < O%%% (n^{7/5), and O(n^{4/3}) edges, respectively. In this paper, we generalize these results to weighted graphs. We consider \\alpha=} 2W , 4W , 6W, where W is the maximum edge weight in G. We first show that every n-node graph has a subsetwise 2W-spanner on O(n |S|^{1/2) edges where S \\subseteq V and W is a constant. We then show that for every set P with |P| = p vertex demand pairs, there are pairwise 2W- and 4W-spanners on O(np^{1/3}) and O(np^{2/7}) edges respectively. We also show that for every set P, there is a 6W-spanner on O(np^{1/4}) edges where W is a constant. We then show that every graph has additive 2W- and 4W-spanners on O(n^{3/2}) and O(n^{7/5}) edges respectively. 
Finally, we show that every graph has an additive 6W-spanner on O(n^{4/3}) edges where W is a constant} .", "after_revision": "Aspanner of a graph G is a subgraph H ), }%DIFDELCMD < O%%% ), and O(n^{4/3}) edges, respectively. In this paper, we generalize these results to weighted graphs. We consider \\alpha=} that approximately preserves its pairwise distances. Spanners are commonly applied to compress computation on metric spaces corresponding to weighted input graphs. Classic spanner constructions can seamlessly handle edge weights, so long as error is measuredmultiplicatively. In this work, we investigate whether one can similarly extend constructions of spanners withadditive error. These extensions are not immediate, due to a key lemma about the size of shortest path neighborhoods that fails for weighted graphs. Despite this, we recover a suitable amortized version, which lets us prove direct extensions of classic +2 and +4 unweighted spanners (both all-pairs and pairwise) to + 2W and + 4W weighted spanners, where W is the maximum edge weight ) edges where S \\subseteq V and W is a constant. We then show that for every set P with |P| = p vertex demand pairs, there are pairwise 2W- and 4W-spanners on O(np^{1/3}) and O(np^{2/7}) edges respectively. We also show that for every set P, there is a 6W-spanner on O(np^{1/4}) edges where W is a constant. We then show that every graph has additive 2W- and 4W-spanners on O(n^{3/2}) and O(n^{7/5}) edges respectively. Finally, we show that every graph has an additive 6W-spanner on O(n^{4/3}) edges where W is a constant} . 
For a technical reason, the +6 unweighted spanner becomes a +8W weighted spanner; closing this error gap is an interesting remaining open problem .", "edit_actions": [{"type": "R", "before": "An \\alpha-additive spanner of an undirected graph G =(V, E)", "after": "A", "start_char_pos": 0, "end_char_pos": 59}, {"type": "A", "before": null, "after": "spanner", "start_char_pos": 59, "end_char_pos": 59}, {"type": "A", "before": null, "after": "of a graph G", "start_char_pos": 60, "end_char_pos": 60}, {"type": "D", "before": "such that the distance between any two vertices in G is stretched by no more than an additive factor of \\alpha. It is previously known that unweighted graphshave 2-, 4-, and 6-additive spanners containing", "after": null, "start_char_pos": 77, "end_char_pos": 281}, {"type": "D", "before": "(n^{3/2", "after": null, "start_char_pos": 282, "end_char_pos": 289}, {"type": "D", "before": "(n^{7/5", "after": null, "start_char_pos": 311, "end_char_pos": 318}, {"type": "A", "before": null, "after": "that approximately preserves its pairwise distances. Spanners are commonly applied to compress computation on metric spaces corresponding to weighted input graphs. Classic spanner constructions can seamlessly handle edge weights, so long as error is measured", "start_char_pos": 441, "end_char_pos": 441}, {"type": "A", "before": null, "after": "multiplicatively", "start_char_pos": 441, "end_char_pos": 441}, {"type": "A", "before": null, "after": ". In this work, we investigate whether one can similarly extend constructions of spanners with", "start_char_pos": 441, "end_char_pos": 441}, {"type": "A", "before": null, "after": "additive", "start_char_pos": 441, "end_char_pos": 441}, {"type": "A", "before": null, "after": "error. These extensions are not immediate, due to a key lemma about the size of shortest path neighborhoods that fails for weighted graphs. 
Despite this, we recover a suitable amortized version, which lets us prove direct extensions of classic +2 and +4 unweighted spanners (both all-pairs and pairwise) to +", "start_char_pos": 442, "end_char_pos": 442}, {"type": "R", "before": ",", "after": "and +", "start_char_pos": 446, "end_char_pos": 447}, {"type": "R", "before": ", 6W,", "after": "weighted spanners,", "start_char_pos": 451, "end_char_pos": 456}, {"type": "D", "before": "in G. We first show that every n-node graph has a subsetwise 2W-spanner on O(n |S|^{1/2", "after": null, "start_char_pos": 492, "end_char_pos": 579}, {"type": "A", "before": null, "after": ". For a technical reason, the +6 unweighted spanner becomes a +8W weighted spanner; closing this error gap is an interesting remaining open problem", "start_char_pos": 1103, "end_char_pos": 1103}], "sents_char_pos": [0, 188, 356, 419, 627, 785, 886, 998]} {"doc_id": "2002.07561", "revision_depth": "1", "before_revision": "In electricity markets futures deliver the underlying over a period and thus function as a swap contract . In this paper we introduce a market price of risk for delivery periods of electricity swaps . In particular, we suggest a weighted geometric average of an artificial geometric electricity futures price over the corresponding delivery period. This leads to a geometric electricity swap price dynamics without any approximation requirements . Our framework allows to include typical features as the Samuelson effect, seasonalities as well as a stochastic volatilityin the absence of arbitrage. We show that our suggested model is suitable for pricing options on electricity swaps using the Heston method. Especially, we illustrate the related pricing procedure for electricity swaps and options in the setting of Arismendi et al. 
(2016), Schneider and Tavin (2018) and Fanelli and Schmeck (2019) .", "after_revision": "In electricity markets , futures contracts typically function as a swap since they deliver the underlying over a period of time . In this paper , we introduce a market price for the delivery periods of electricity swaps , thereby opening an arbitrage-free pricing framework for derivatives based on these contracts. Furthermore, we use a weighted geometric averaging of an artificial geometric futures price over the corresponding delivery period. Without any need for approximations, this averaging results in geometric swap price dynamics . Our framework allows for including typical features as the Samuelson effect, seasonalities , and stochastic volatility. In particular, we investigate the pricing procedures for electricity swaps and options in line with Arismendi et al. (2016), Schneider and Tavin (2018) , and Fanelli and Schmeck (2019) . A numerical study highlights the differences between these models depending on the delivery period .", "edit_actions": [{"type": "R", "before": "futures", "after": ", futures contracts typically function as a swap since they", "start_char_pos": 23, "end_char_pos": 30}, {"type": "R", "before": "and thus function as a swap contract", "after": "of time", "start_char_pos": 68, "end_char_pos": 104}, {"type": "A", "before": null, "after": ",", "start_char_pos": 121, "end_char_pos": 121}, {"type": "R", "before": "of risk for", "after": "for the", "start_char_pos": 150, "end_char_pos": 161}, {"type": "R", "before": ". In particular, we suggest", "after": ", thereby opening an arbitrage-free pricing framework for derivatives based on these contracts. 
Furthermore, we use", "start_char_pos": 200, "end_char_pos": 227}, {"type": "R", "before": "average", "after": "averaging", "start_char_pos": 249, "end_char_pos": 256}, {"type": "D", "before": "electricity", "after": null, "start_char_pos": 284, "end_char_pos": 295}, {"type": "R", "before": "This leads to a geometric electricity", "after": "Without any need for approximations, this averaging results in geometric", "start_char_pos": 350, "end_char_pos": 387}, {"type": "D", "before": "without any approximation requirements", "after": null, "start_char_pos": 408, "end_char_pos": 446}, {"type": "R", "before": "to include", "after": "for including", "start_char_pos": 470, "end_char_pos": 480}, {"type": "R", "before": "as well as a stochastic volatilityin the absence of arbitrage. We show that our suggested model is suitable for pricing options on electricity swaps using the Heston method. Especially, we illustrate the related pricing procedure", "after": ", and stochastic volatility. In particular, we investigate the pricing procedures", "start_char_pos": 537, "end_char_pos": 766}, {"type": "R", "before": "the setting of", "after": "line with", "start_char_pos": 804, "end_char_pos": 818}, {"type": "A", "before": null, "after": ",", "start_char_pos": 871, "end_char_pos": 871}, {"type": "A", "before": null, "after": ". A numerical study highlights the differences between these models depending on the delivery period", "start_char_pos": 903, "end_char_pos": 903}], "sents_char_pos": [0, 106, 201, 349, 448, 599, 710]} {"doc_id": "2002.07840", "revision_depth": "1", "before_revision": "A unit disk graph G on a given set of points P in the plane is a geometric graph where an edge exists between two points p,q \\in P if and only if |pq| \\leq 1. A subgraph G' of G is a k-hop spanner if and only if for every edge pq\\in G, the topological shortest path between p,q in G' has at most k edges. We obtain the following results for unit disk graphs. 
( i ) Every n-vertex unit disk graph has a 5-hop spanner with at most 5.5n edges. We analyze the family of spanners constructed by Biniaz ( WADS 2019 ) and improve the upper bound on the number of edges from 9n to 5.5n. ( ii ) Using a new construction, we show that every n-vertex unit disk graph has a 3-hop spanner with at most 11n edges. ( iii ) Every n-vertex unit disk graph has a 2-hop spanner with O(n ^{3/2 ) edges. This is the first construction of a 2-hop spanner with a subquadratic number of edges. (iv ) For every sufficiently large n, there exists a set P of n points such that every plane hop spanner on P has hop stretch factor at least 4. Previously, no lower bound greater than 2 was known. ( v ) For every point set on a circle, there exists a plane 4-hop spanner. As such, this provides a tight bound for points on a circle. ( vi ) The maximum degree of k-hop spanners cannot be bounded above by a function of k.", "after_revision": "A unit disk graph G on a given set of points P in the plane is a geometric graph where an edge exists between two points p,q \\in P if and only if |pq| \\leq 1. A subgraph G' of G is a k-hop spanner if and only if for every edge pq\\in G, the topological shortest path between p,q in G' has at most k edges. We obtain the following results for unit disk graphs. ( I ) Every n-vertex unit disk graph has a 5-hop spanner with at most 5.5n edges. We analyze the family of spanners constructed by Biniaz ( 2020 ) and improve the upper bound on the number of edges from 9n to 5.5n. ( II ) Using a new construction, we show that every n-vertex unit disk graph has a 3-hop spanner with at most 11n edges. ( III ) Every n-vertex unit disk graph has a 2-hop spanner with O(n \\log n ) edges. This is the first nontrivial construction of 2-hop spanners. (IV ) For every sufficiently large n, there exists a set P of n points on a circle, such that every plane hop spanner on P has hop stretch factor at least 4. 
Previously, no lower bound greater than 2 was known. ( V ) For every point set on a circle, there exists a plane 4-hop spanner. As such, this provides a tight bound for points on a circle. ( VI ) The maximum degree of k-hop spanners cannot be bounded from above by a function of ~ k.", "edit_actions": [{"type": "R", "before": "k-hop spanner", "after": "k-hop spanner", "start_char_pos": 183, "end_char_pos": 196}, {"type": "R", "before": "i", "after": "I", "start_char_pos": 361, "end_char_pos": 362}, {"type": "R", "before": "WADS 2019", "after": "2020", "start_char_pos": 499, "end_char_pos": 508}, {"type": "R", "before": "ii", "after": "II", "start_char_pos": 581, "end_char_pos": 583}, {"type": "R", "before": "iii", "after": "III", "start_char_pos": 702, "end_char_pos": 705}, {"type": "R", "before": "^{3/2", "after": "\\log n", "start_char_pos": 768, "end_char_pos": 773}, {"type": "R", "before": "construction of a", "after": "nontrivial construction of", "start_char_pos": 801, "end_char_pos": 818}, {"type": "R", "before": "spanner with a subquadratic number of edges. (iv", "after": "spanners. (IV", "start_char_pos": 825, "end_char_pos": 873}, {"type": "A", "before": null, "after": "on a circle,", "start_char_pos": 941, "end_char_pos": 941}, {"type": "R", "before": "v", "after": "V", "start_char_pos": 1071, "end_char_pos": 1072}, {"type": "R", "before": "vi", "after": "VI", "start_char_pos": 1207, "end_char_pos": 1209}, {"type": "A", "before": null, "after": "from", "start_char_pos": 1267, "end_char_pos": 1267}, {"type": "A", "before": null, "after": "~", "start_char_pos": 1291, "end_char_pos": 1291}], "sents_char_pos": [0, 304, 358, 440, 699, 782, 869, 1068, 1143, 1204]} {"doc_id": "2002.08972", "revision_depth": "1", "before_revision": "Head mounted displaysbring eye tracking into daily use and this raises privacy concerns for users . 
Privacy-preservation techniques such as differential privacy mechanisms are recently applied to the eye tracking data obtained from such displays ; however, standard differential privacy mechanisms are vulnerable to temporal correlations in the eye movement features. In this work, a transform coding based differential privacy mechanism is proposed for the first time in the eye tracking literature to further adapt it to statistics of eye movement feature data by comparing various low-complexity methods. Fourier Perturbation Algorithm, which is a differential privacy mechanism, is extended and a scaling mistake in its proof is corrected. Significant reductions in correlations in addition to query sensitivities are illustrated , which provide the best utility-privacy trade-off in the literature for the eye tracking dataset used. The differentially private eye movement data are evaluated also for classification accuracies for gender and document-type predictions to show that higher privacy is obtained without a reduction in the classification accuracies by using proposed methods .", "after_revision": "New generation head-mounted displays, such as VR and AR glasses, are coming into the market with already integrated eye tracking and are expected to enable novel ways of human-computer interaction in many applications. However, since eye movement properties contain biometric information, privacy concerns have to be handled properly . Privacy-preservation techniques such as differential privacy mechanisms have recently been applied to the eye movement data obtained from such displays . Standard differential privacy mechanisms ; however, are vulnerable to temporal correlations in the eye movement features. In this work, we propose a novel transform-coding based differential privacy mechanism to further adapt it to the statistics of eye movement feature data by comparing various low-complexity methods. 
We extent Fourier Perturbation Algorithm, which is a differential privacy mechanism, and correct a scaling mistake in its proof . Furthermore, we illustrate significant reductions in sample correlations in addition to query sensitivities , which provide the best utility-privacy trade-off in the eye tracking literature. Our results show significantly high privacy without loss in classification accuracies as well .", "edit_actions": [{"type": "R", "before": "Head mounted displaysbring eye tracking into daily use and this raises privacy concerns for users", "after": "New generation head-mounted displays, such as VR and AR glasses, are coming into the market with already integrated eye tracking and are expected to enable novel ways of human-computer interaction in many applications. However, since eye movement properties contain biometric information, privacy concerns have to be handled properly", "start_char_pos": 0, "end_char_pos": 97}, {"type": "R", "before": "are recently", "after": "have recently been", "start_char_pos": 172, "end_char_pos": 184}, {"type": "R", "before": "tracking", "after": "movement", "start_char_pos": 204, "end_char_pos": 212}, {"type": "R", "before": "; however, standard", "after": ". 
Standard", "start_char_pos": 246, "end_char_pos": 265}, {"type": "A", "before": null, "after": "; however,", "start_char_pos": 298, "end_char_pos": 298}, {"type": "R", "before": "a transform coding", "after": "we propose a novel transform-coding", "start_char_pos": 383, "end_char_pos": 401}, {"type": "D", "before": "is proposed for the first time in the eye tracking literature", "after": null, "start_char_pos": 439, "end_char_pos": 500}, {"type": "A", "before": null, "after": "the", "start_char_pos": 524, "end_char_pos": 524}, {"type": "A", "before": null, "after": "We extent", "start_char_pos": 610, "end_char_pos": 610}, {"type": "R", "before": "is extended and", "after": "and correct", "start_char_pos": 686, "end_char_pos": 701}, {"type": "R", "before": "is corrected. Significant reductions in", "after": ". Furthermore, we illustrate significant reductions in sample", "start_char_pos": 733, "end_char_pos": 772}, {"type": "D", "before": "are illustrated", "after": null, "start_char_pos": 821, "end_char_pos": 836}, {"type": "R", "before": "literature for the eye tracking dataset used. The differentially private eye movement data are evaluated also for classification accuracies for gender and document-type predictions to show that higher privacy is obtained without a reduction in the classification accuracies by using proposed methods", "after": "eye tracking literature. Our results show significantly high privacy without loss in classification accuracies as well", "start_char_pos": 895, "end_char_pos": 1194}], "sents_char_pos": [0, 247, 368, 609, 746, 940]} {"doc_id": "2002.09062", "revision_depth": "1", "before_revision": "The inference of chemical reaction networks is an important task in understanding the chemical processes in life sciences and environment . Yet, only a few reaction systems are well-understood due to a large number of important reaction pathways involved but still unknown. 
Revealing unknown reaction pathways is an important task for scientific discovery that takes decades and requires lots of expert knowledge. This work presents a neural network approach for discovering unknown reaction pathways from concentration time series data. The neural network denoted as Chemical Reaction Neural Network (CRNN), is designed to be equivalent to chemical reaction networks by following the fundamental physics laws of the Law of Mass Action and Arrhenius Law. The CRNN is physically interpretable , and its weights correspond to the reaction pathways and rate constants of the chemical reaction network. Then, inferencing the reaction pathways and the rate constants are accomplished by training the equivalent CRNN via stochastic gradient descent. The approach precludes the need for expert knowledge in proposing candidate reactions, such that the inference is autonomous and applicable to new systemsfor which there is no existing empirical knowledge to propose reaction pathways . The physical interpretability also makes the CRNN not only capable of fitting the data for a given system but also developing knowledge of unknown pathways that could be generalized to similar chemical systems . Finally, the approach is applied to several chemical systems in chemical engineering and biochemistry to demonstrate its robustness and generality .", "after_revision": "Chemical reactions occur in energy, environmental, biological, and many other natural systems, and the inference of the reaction networks is essential to understand and design the chemical processes in engineering and life sciences . Yet, revealing the reaction pathways for complex systems and processes is still challenging due to the lack of knowledge of the involved species and reactions. Here, we present a neural network approach that autonomously discovers reaction pathways from the time-resolved species concentration data. 
The proposed Chemical Reaction Neural Network (CRNN), by design, satisfies the fundamental physics laws , including the Law of Mass Action and the Arrhenius Law. Consequently, the CRNN is physically interpretable such that the reaction pathways can be interpreted, and the kinetic parameters can be quantified simultaneously from the weights of the neural network. The inference of the chemical pathways is accomplished by training the CRNN with species concentration data via stochastic gradient descent. We demonstrate the successful implementations and the robustness of the approach in elucidating the chemical reaction pathways of several chemical engineering and biochemical systems. The autonomous inference by the CRNN approach precludes the need for expert knowledge in proposing candidate networks and addresses the curse of dimensionality in complex systems . The physical interpretability also makes the CRNN capable of not only fitting the data for a given system but also developing knowledge of unknown pathways that could be generalized to similar chemical systems .", "edit_actions": [{"type": "R", "before": "The inference of chemical", "after": "Chemical reactions occur in energy, environmental, biological, and many other natural systems, and the inference of the", "start_char_pos": 0, "end_char_pos": 25}, {"type": "R", "before": "an important task in understanding", "after": "essential to understand and design", "start_char_pos": 47, "end_char_pos": 81}, {"type": "R", "before": "life sciences and environment", "after": "engineering and life sciences", "start_char_pos": 108, "end_char_pos": 137}, {"type": "R", "before": "only a few reaction systems are well-understood due to a large number of important reaction pathways involved but still unknown. Revealing unknown reaction pathways is an important task for scientific discovery that takes decades and requires lots of expert knowledge. 
This work presents", "after": "revealing the reaction pathways for complex systems and processes is still challenging due to the lack of knowledge of the involved species and reactions. Here, we present", "start_char_pos": 145, "end_char_pos": 432}, {"type": "R", "before": "for discovering unknown", "after": "that autonomously discovers", "start_char_pos": 459, "end_char_pos": 482}, {"type": "R", "before": "concentration time series", "after": "the time-resolved species concentration", "start_char_pos": 506, "end_char_pos": 531}, {"type": "R", "before": "neural network denoted as", "after": "proposed", "start_char_pos": 542, "end_char_pos": 567}, {"type": "R", "before": "is designed to be equivalent to chemical reaction networks by following", "after": "by design, satisfies", "start_char_pos": 609, "end_char_pos": 680}, {"type": "R", "before": "of", "after": ", including", "start_char_pos": 710, "end_char_pos": 712}, {"type": "A", "before": null, "after": "the", "start_char_pos": 740, "end_char_pos": 740}, {"type": "R", "before": "The", "after": "Consequently, the", "start_char_pos": 756, "end_char_pos": 759}, {"type": "R", "before": ", and its weights correspond to", "after": "such that", "start_char_pos": 793, "end_char_pos": 824}, {"type": "R", "before": "and rate constants of the chemical reaction network. Then, inferencing the reaction pathways and the rate constants are", "after": "can be interpreted, and the kinetic parameters can be quantified simultaneously from the weights of the neural network. The inference of the chemical pathways is", "start_char_pos": 847, "end_char_pos": 966}, {"type": "R", "before": "equivalent CRNN", "after": "CRNN with species concentration data", "start_char_pos": 996, "end_char_pos": 1011}, {"type": "R", "before": "The", "after": "We demonstrate the successful implementations and the robustness of the approach in elucidating the chemical reaction pathways of several chemical engineering and biochemical systems. 
The autonomous inference by the CRNN", "start_char_pos": 1045, "end_char_pos": 1048}, {"type": "R", "before": "reactions, such that the inference is autonomous and applicable to new systemsfor which there is no existing empirical knowledge to propose reaction pathways", "after": "networks and addresses the curse of dimensionality in complex systems", "start_char_pos": 1121, "end_char_pos": 1278}, {"type": "R", "before": "not only capable of", "after": "capable of not only", "start_char_pos": 1331, "end_char_pos": 1350}, {"type": "D", "before": ". Finally, the approach is applied to several chemical systems in chemical engineering and biochemistry to demonstrate its robustness and generality", "after": null, "start_char_pos": 1491, "end_char_pos": 1639}], "sents_char_pos": [0, 139, 273, 413, 537, 755, 899, 1044, 1280, 1492]} {"doc_id": "2002.09949", "revision_depth": "2", "before_revision": "The Semantic Web is made of structured sources of information, such as DBpedia or GeoNames, also known as Linked Data(LD). They can be linked and queried together. The information they contain is atomized into triples , each triple being a simple statement composed of a subject, a predicate and an object. Triples can be combined to form higher level statements following information needs. But reconstituting chains of triples has a cognitive cost, and this makes it difficult for data producers to have meaningful overviews of the content of their own datasets. We report a characterisation of LD producers' needs, and introduce the concept of path-based summaries, which carries a higher level of semantics, to meet their needs . We present the tool Path Outlinesto support LD producers in browsing path-based summaries of their datasets. We describe its interface based on the broken (out)lines layout algorithm and the path browser visualisation . We compare Path Outlines with the current baseline technique (Virtuoso SPARQL query editor) in an experiment with 36 participants. 
We show that Path Outlines is faster, leads to better task completion and less errors, and that participants prefer it, find it easier and more comfortable to use .", "after_revision": "Knowledge Graphs have become a ubiquitous technology powering search engines, recommender systems, connected objects, corporate knowledge management and Open Data. They rely on small units of information named triples that can be combined to form higher level statements across datasets following information needs. But data producers face a problem: reconstituting chains of triples has a high cognitive cost, which hinders them from gaining meaningful overviews of their own datasets. We introduce path outlines: conceptual objects characterizing sequences of triples with descriptive statistics. We interview 11 data producers to evaluate their interest . We present Path Outlines, a tool to browse path-based summaries , based on coordinated views with 2 novel visualisations . We compare Path Outlines with the current baseline technique in an experiment with 36 participants. We show that it is 3 times faster, leads to better task completion , less errors, that participants prefer it, and find tasks easier with it .", "edit_actions": [{"type": "R", "before": "The Semantic Web is made of structured sources of information, such as DBpedia or GeoNames, also known as Linked Data(LD). They can be linked and queried together. The information they contain is atomized into triples , each triple being a simple statement composed of a subject, a predicate and an object. Triples", "after": "Knowledge Graphs have become a ubiquitous technology powering search engines, recommender systems, connected objects, corporate knowledge management and Open Data. 
They rely on small units of information named triples that", "start_char_pos": 0, "end_char_pos": 314}, {"type": "A", "before": null, "after": "across datasets", "start_char_pos": 363, "end_char_pos": 363}, {"type": "A", "before": null, "after": "data producers face a problem:", "start_char_pos": 397, "end_char_pos": 397}, {"type": "A", "before": null, "after": "high", "start_char_pos": 437, "end_char_pos": 437}, {"type": "R", "before": "and this makes it difficult for data producers to have", "after": "which hinders them from gaining", "start_char_pos": 454, "end_char_pos": 508}, {"type": "D", "before": "the content of", "after": null, "start_char_pos": 533, "end_char_pos": 547}, {"type": "R", "before": "report a characterisation of LD producers' needs, and introduce the concept of path-based summaries, which carries a higher level of semantics, to meet their needs", "after": "introduce path outlines: conceptual objects characterizing sequences of triples with descriptive statistics. We interview 11 data producers to evaluate their interest", "start_char_pos": 571, "end_char_pos": 734}, {"type": "R", "before": "the tool Path Outlinesto support LD producers in browsing", "after": "Path Outlines, a tool to browse", "start_char_pos": 748, "end_char_pos": 805}, {"type": "R", "before": "of their datasets. 
We describe its interface based on the broken (out)lines layout algorithm and the path browser visualisation", "after": ", based on coordinated views with 2 novel visualisations", "start_char_pos": 827, "end_char_pos": 954}, {"type": "D", "before": "(Virtuoso SPARQL query editor)", "after": null, "start_char_pos": 1018, "end_char_pos": 1048}, {"type": "R", "before": "Path Outlines is", "after": "it is 3 times", "start_char_pos": 1101, "end_char_pos": 1117}, {"type": "R", "before": "and", "after": ",", "start_char_pos": 1158, "end_char_pos": 1161}, {"type": "D", "before": "and", "after": null, "start_char_pos": 1175, "end_char_pos": 1178}, {"type": "R", "before": "find it easier and more comfortable to use", "after": "and find tasks easier with it", "start_char_pos": 1208, "end_char_pos": 1250}], "sents_char_pos": [0, 122, 163, 306, 392, 567, 736, 845, 956, 1087]} {"doc_id": "2002.10047", "revision_depth": "1", "before_revision": "Dense subgraphs capture strong communities in social networks and entities possessing strong interactions in biological networks. In particular, k-clique counting and listing have applications in identifying important actors in a graph. However, finding k-cliques is computationally expensive, and thus it is important to have fast parallel algorithms. We present a new parallel algorithm for k-clique counting that has polylogarithmic span and is work-efficient with respect to the well-known sequential algorithmfor k-clique listing by Chiba and Nishizeki . Our algorithm can be extended to support listing and enumeration, and is based on computing low out-degree orientations . We present a new linear-work and polylogarithmic span algorithm for computing such orientations, and new parallel algorithms for producing unbiased estimations of clique counts . Finally, we design new parallel work-efficient algorithms for approximating the k-clique densest subgraph . 
Our first algorithm gives a 1/k-approximation and is based on iteratively peeling vertices with the lowest clique counts; our algorithm is work-efficient, but we prove that this process is P-complete and hence does not have polylogarithmic span. Our second algorithm gives a 1/(k(1+\\epsilon))-approximation , is work-efficient, and has polylogarithmic span. In addition, we implement these algorithms and propose optimizations . On a 60-core machine , we achieve 13.23 -38.99 x and 1.19 -13.76 x self-relative parallel speedup for k-clique counting and k-clique densest subgraph, respectively. Compared to the state-of-the-art parallel k-clique counting algorithms, we achieve a 1.31-9.88 x speedup, and compared to existing implementations of k-clique densest subgraph, we achieve a 1.01-11.83 x speedup. We are able to compute the 4-clique counts on the largest publicly-available graph with over two hundred billion edges .", "after_revision": " We present a new parallel algorithm for k-clique counting /listing that has polylogarithmic span (parallel time) and is work-efficient (matches the work of the best sequential algorithm) for sparse graphs . Our algorithm is based on computing low out-degree orientations , which we present new linear-work and polylogarithmic-span algorithms for computing in parallel. We also present new parallel algorithms for producing unbiased estimations of clique counts using graph sparsification . Finally, we design two new parallel work-efficient algorithms for approximating the k-clique densest subgraph , the first of which is a 1/k-approximation and the second of which is a 1/(k(1+\\epsilon))-approximation and has polylogarithmic span. Our first algorithm does not have polylogarithmic span, but we prove that it solves a P-complete problem. In addition to the theoretical results, we also implement the algorithms and propose various optimizations to improve their practical performance . 
On a 30-core machine with two-way hyper-threading, our algorithms achieve 13.23 --38.99 x and 1.19 --13.76 x self-relative parallel speedup for k-clique counting and k-clique densest subgraph, respectively. Compared to the state-of-the-art parallel k-clique counting algorithms, we achieve up to 9.88 x speedup, and compared to existing implementations of k-clique densest subgraph, we achieve up to 11.83 x speedup. We are able to compute the 4-clique counts on the largest publicly-available graph with over two hundred billion edges for the first time .", "edit_actions": [{"type": "D", "before": "Dense subgraphs capture strong communities in social networks and entities possessing strong interactions in biological networks. In particular, k-clique counting and listing have applications in identifying important actors in a graph. However, finding k-cliques is computationally expensive, and thus it is important to have fast parallel algorithms.", "after": null, "start_char_pos": 0, "end_char_pos": 352}, {"type": "A", "before": null, "after": "/listing", "start_char_pos": 411, "end_char_pos": 411}, {"type": "A", "before": null, "after": "(parallel time)", "start_char_pos": 442, "end_char_pos": 442}, {"type": "R", "before": "with respect to the well-known sequential algorithmfor k-clique listing by Chiba and Nishizeki", "after": "(matches the work of the best sequential algorithm) for sparse graphs", "start_char_pos": 465, "end_char_pos": 559}, {"type": "D", "before": "can be extended to support listing and enumeration, and", "after": null, "start_char_pos": 576, "end_char_pos": 631}, {"type": "R", "before": ". We present a", "after": ", which we present", "start_char_pos": 682, "end_char_pos": 696}, {"type": "R", "before": "polylogarithmic span algorithm for computing such orientations, and", "after": "polylogarithmic-span algorithms for computing in parallel. 
We also present", "start_char_pos": 717, "end_char_pos": 784}, {"type": "A", "before": null, "after": "using graph sparsification", "start_char_pos": 861, "end_char_pos": 861}, {"type": "A", "before": null, "after": "two", "start_char_pos": 883, "end_char_pos": 883}, {"type": "R", "before": ". Our first algorithm gives", "after": ", the first of which is", "start_char_pos": 971, "end_char_pos": 998}, {"type": "R", "before": "is based on iteratively peeling vertices with the lowest clique counts; our algorithm is work-efficient, but we prove that this process is P-complete and hence does not have polylogarithmic span. Our second algorithm gives", "after": "the second of which is", "start_char_pos": 1023, "end_char_pos": 1245}, {"type": "D", "before": ", is work-efficient,", "after": null, "start_char_pos": 1280, "end_char_pos": 1300}, {"type": "R", "before": "In addition, we implement these", "after": "Our first algorithm does not have polylogarithmic span, but we prove that it solves a P-complete problem. 
In addition to the theoretical results, we also implement the", "start_char_pos": 1331, "end_char_pos": 1362}, {"type": "R", "before": "optimizations", "after": "various optimizations to improve their practical performance", "start_char_pos": 1386, "end_char_pos": 1399}, {"type": "R", "before": "60-core machine , we", "after": "30-core machine with two-way hyper-threading, our algorithms", "start_char_pos": 1407, "end_char_pos": 1427}, {"type": "R", "before": "-38.99", "after": "--38.99", "start_char_pos": 1442, "end_char_pos": 1448}, {"type": "R", "before": "-13.76", "after": "--13.76", "start_char_pos": 1460, "end_char_pos": 1466}, {"type": "R", "before": "a 1.31-9.88", "after": "up to 9.88", "start_char_pos": 1650, "end_char_pos": 1661}, {"type": "R", "before": "a 1.01-11.83", "after": "up to 11.83", "start_char_pos": 1755, "end_char_pos": 1767}, {"type": "A", "before": null, "after": "for the first time", "start_char_pos": 1898, "end_char_pos": 1898}], "sents_char_pos": [0, 129, 236, 352, 561, 683, 863, 972, 1094, 1218, 1330, 1401, 1566, 1778]} {"doc_id": "2002.11193", "revision_depth": "1", "before_revision": "Spatio-temporal information is increasingly used for driving a plethora of intelligent transportation, smart-city, and crowd-sensing applications. At the same time, different types of data marketplaces are proposed for de-siloing and monetising individual and enterprise data . In this paper we study the problem of estimating the relative value of spatio-temporal data sold in wholesale and retail data marketplaces for the purpose of forecasting future demand in a certain area, e. g. a city. Using as case studies large datasets of taxi rides from Chicago and New York, we ask questions such as \"When does it make sense for different taxi companies to combine their data?\" and \"How should different companies be compensated for the data that they share?\". 
We then turn our attention to the even harder problem of establishing the value of the data brought to retail marketplaces by individual drivers. Overall, we show that simplistic approaches, such as assuming that the value of the dataheld by companies or drivers is proportional to its volumeare inaccurate, because they fail to consider the complex complementarities that may exist among different datasets. To remedy this , more complex notions of value-sharing from economics and game-theory, such as the Shapley value need to be used to capture the effect of mixing datasets on the accuracy of forecasting algorithms driven by them . Applying the Shapley value to large datasets from many sources is computationally challenging. We use structured sampling to overcome such scalability challenges and manage to compute accurately the importance of different data sources, even when their number ranges in the thousands, as in the case of all the taxi drivers in a large metropolis .", "after_revision": "Spatio-temporal information is used for driving a plethora of intelligent transportation, smart-city, and crowd-sensing applications. Since data is now considered a valuable production factor, data marketplaces have appeared to help individuals and enterprises bring it to market to satisfy the ever-growing demand. In such marketplaces, several sources may need to combine their data in order to meet the requirements of different applications . In this paper we study the problem of estimating the relative value of different spatio-temporal datasets combined in wholesale and retail marketplaces for the purpose of predicting demand in metropolitan areas. Using as case studies large datasets of taxi rides from Chicago and New York, we ask questions such as \"When does it make sense for different taxi companies to combine their data?\" , and \"How should different companies be compensated for the data that they share?\". 
We then turn our attention to the even harder problem of establishing the relative value of the data brought to retail marketplaces by individual drivers. Overall, we show that simplistic but popular approaches for estimating the relative value of data, such as using volume , or the ``leave-one-out'' heuristic, are inaccurate. Instead, more complex notions of value from economics and game-theory, such as the Shapley value need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms . Applying the Shapley value to large datasets from many sources is , of course, computationally challenging. We resort to structured sampling and manage to compute accurately the importance of thousands of data sources. We show that the relative value of the data held by different taxi companies and drivers may differ substantially, and that its relative ranking may change from district to district within a metropolitan area .", "edit_actions": [{"type": "D", "before": "increasingly", "after": null, "start_char_pos": 31, "end_char_pos": 43}, {"type": "R", "before": "At the same time, different types of data marketplaces are proposed for de-siloing and monetising individual and enterprise data", "after": "Since data is now considered a valuable production factor, data marketplaces have appeared to help individuals and enterprises bring it to market to satisfy the ever-growing demand. 
In such marketplaces, several sources may need to combine their data in order to meet the requirements of different applications", "start_char_pos": 147, "end_char_pos": 275}, {"type": "A", "before": null, "after": "different", "start_char_pos": 349, "end_char_pos": 349}, {"type": "R", "before": "data sold", "after": "datasets combined", "start_char_pos": 366, "end_char_pos": 375}, {"type": "D", "before": "data", "after": null, "start_char_pos": 400, "end_char_pos": 404}, {"type": "R", "before": "forecasting future demand in a certain area, e. g. a city.", "after": "predicting demand in metropolitan areas.", "start_char_pos": 437, "end_char_pos": 495}, {"type": "A", "before": null, "after": ",", "start_char_pos": 677, "end_char_pos": 677}, {"type": "A", "before": null, "after": "relative", "start_char_pos": 835, "end_char_pos": 835}, {"type": "R", "before": "approaches, such as assuming that the value of the dataheld by companies or drivers is proportional to its volumeare inaccurate, because they fail to consider the complex complementarities that may exist among different datasets. To remedy this", "after": "but popular approaches for estimating the relative value of data, such as using volume", "start_char_pos": 941, "end_char_pos": 1185}, {"type": "A", "before": null, "after": "or the ``leave-one-out'' heuristic, are inaccurate. 
Instead,", "start_char_pos": 1188, "end_char_pos": 1188}, {"type": "R", "before": "value-sharing", "after": "value", "start_char_pos": 1213, "end_char_pos": 1226}, {"type": "R", "before": "used", "after": "employed if one wishes", "start_char_pos": 1296, "end_char_pos": 1300}, {"type": "R", "before": "effect of mixing", "after": "complex effects of mixing different", "start_char_pos": 1316, "end_char_pos": 1332}, {"type": "D", "before": "driven by them", "after": null, "start_char_pos": 1384, "end_char_pos": 1398}, {"type": "A", "before": null, "after": ", of course,", "start_char_pos": 1467, "end_char_pos": 1467}, {"type": "R", "before": "use structured sampling to overcome such scalability challenges", "after": "resort to structured sampling", "start_char_pos": 1500, "end_char_pos": 1563}, {"type": "R", "before": "different data sources, even when their number ranges in the thousands, as in the case of all the taxi drivers in a large metropolis", "after": "thousands of data sources. We show that the relative value of the data held by different taxi companies and drivers may differ substantially, and that its relative ranking may change from district to district within a metropolitan area", "start_char_pos": 1615, "end_char_pos": 1747}], "sents_char_pos": [0, 146, 277, 495, 760, 907, 1170, 1400, 1496]} {"doc_id": "2002.11583", "revision_depth": "1", "before_revision": "Holston, Laubach and Williams' (2017) estimates of the natural rate of interest are driven by the downward trending behaviour of ` other factor' z_{t}. I show that their implementation of Stock and Watson's (1998) Median Unbiased Estimation (MUE) to determine the size of \\lambda_{z is unsound. It cannot recover the ratio of interest \\lambda _{z}=a_{r}\\sigma _{z}/\\sigma _{y} from MUE required for the estimation of the full structural model. This failure is due to their Stage 2 model being incorrectly specified . 
More importantly, the MUE procedure that they implement spuriously amplifies the estimate of \\lambda _{z}. Using a simulation experiment, I show that their MUE procedure generates excessively large estimates of \\lambda _{z} when applied to data simulated from a model where the true \\lambda _{z} is equal to zero. Correcting their Stage 2 MUE procedure leads to a substantially smaller estimate of \\lambda _{z a more subdued downward trending influence of ` other factor' z_{t} on the natural rate. This correction is quantitatively important. With everything else remaining the same in the model, the natural rate of interest is estimated to be 1.5\\% at the end of 2019:Q2; that is, three times the 0.5\\% estimate obtained from Holston et al.'s (2017) original Stage 2 MUE implementation . I also discuss various other issues that arise in their model of the natural rate that make it unsuitable for policy analysis.", "after_revision": "Holston, Laubach and Williams' (2017) estimates of the natural rate of interest are driven by the downward trending behaviour of ' other factor' z_{t}. I show that their implementation of Stock and Watson's (1998) Median Unbiased Estimation (MUE) to determine the size of the \\lambda _{z is unsound. It cannot recover the ratio of interest \\lambda _{z}=a_{r}\\sigma _{z}/\\sigma _{y} from MUE required for the estimation of the full structural model. This failure is due to an 'unnecessary' misspecification in Holston et al.'s (2017) formulation of the Stage 2 model . More importantly, their implementation of MUE on this misspecified Stage 2 model spuriously amplifies the point estimate of \\lambda _{z}. Using a simulation experiment, I show that their procedure generates excessively large estimates of \\lambda _{z} when applied to data generated from a model where the true \\lambda _{z} is equal to zero. 
Correcting the misspecification in their Stage 2 model and the implementation of MUE leads to a substantially smaller \\lambda _{z a more subdued downward trending influence of ' other factor' z_{t} on the natural rate. Moreover, the \\lambda _{z . I also discuss various other estimation issues that arise in Holston et al.'s (2017) model of the natural rate that make it unsuitable for policy analysis.", "edit_actions": [{"type": "R", "before": "`", "after": "'", "start_char_pos": 129, "end_char_pos": 130}, {"type": "R", "before": "\\lambda_{z", "after": "the \\lambda _{z", "start_char_pos": 272, "end_char_pos": 282}, {"type": "R", "before": "their", "after": "an 'unnecessary' misspecification in Holston et al.'s (2017) formulation of the", "start_char_pos": 467, "end_char_pos": 472}, {"type": "D", "before": "being incorrectly specified", "after": null, "start_char_pos": 487, "end_char_pos": 514}, {"type": "R", "before": "the MUE procedure that they implement", "after": "their implementation of MUE on this misspecified Stage 2 model", "start_char_pos": 535, "end_char_pos": 572}, {"type": "A", "before": null, "after": "point", "start_char_pos": 598, "end_char_pos": 598}, {"type": "D", "before": "MUE", "after": null, "start_char_pos": 674, "end_char_pos": 677}, {"type": "R", "before": "simulated", "after": "generated", "start_char_pos": 763, "end_char_pos": 772}, {"type": "A", "before": null, "after": "the misspecification in", "start_char_pos": 843, "end_char_pos": 843}, {"type": "R", "before": "MUE procedure", "after": "model and the implementation of MUE", "start_char_pos": 858, "end_char_pos": 871}, {"type": "R", "before": "estimate of \\lambda _{z", "after": "\\lambda _{z", "start_char_pos": 905, "end_char_pos": 928}, {"type": "R", "before": "`", "after": "'", "start_char_pos": 975, "end_char_pos": 976}, {"type": "R", "before": "This correction is quantitatively important. 
With everything else remaining the same in the model, the natural rate of interest is estimated to be 1.5\\% at the end of 2019:Q2; that is, three times the 0.5\\% estimate obtained from Holston et al.'s (2017) original Stage 2 MUE implementation", "after": "Moreover, the \\lambda _{z", "start_char_pos": 1018, "end_char_pos": 1307}, {"type": "A", "before": null, "after": "estimation", "start_char_pos": 1339, "end_char_pos": 1339}, {"type": "R", "before": "their", "after": "Holston et al.'s (2017)", "start_char_pos": 1361, "end_char_pos": 1366}], "sents_char_pos": [0, 151, 294, 443, 516, 624, 831, 1017, 1062, 1193, 1309]} {"doc_id": "2002.12104", "revision_depth": "1", "before_revision": "In the presence of large dimensional datasets that contain many irrelevant features(variables), dimensionality reduction algorithms have proven to be useful in removing features with low variance and combine features with high correlation . In this paper, we propose a new feature selection method which uses singular value decomposition of a matrix and }] the method of least squares to remove the irrelevant features and detect correlations between the remaining features . and remove those features whose weights are smaller than the threshold. To detect the correlations in the reduced matrix, which we still call A, we consider a perturbation }\\tilde =\\mid x -}\\mathbf{x}\\tilde{\\mathbf{x}} \\tilde \\tilde{\\mathbf{x}}. We cluster features first based on \\Delta\\mathbf{x} and then using the entropy of features. Finally, a feature is selected from each sub-cluster based on its weight and entropy. } The effectiveness of our method has been verified by performing a series of comparisons with state-of-the-art feature selection methods over ten genetic datasets ranging up from 9,117 to 267,604 features. 
The results show that our method is favorable in various aspects compared to state-of-the-art feature selection methods.\\e", "after_revision": "A central problem in machine learning and pattern recognition is the process of recognizing the most important features . In this paper, we provide a new feature selection method (DRPT) that consists of first removing the irrelevant features and then detecting correlations between the remaining features. Let D= A\\mid \\mathbf{b}] be a dataset, where \\mathbf{b the corresponding column (feature). We define a threshold based on the local maxima of \\mathbf{x and remove those features whose weights are smaller than the threshold. To detect the correlations in the reduced matrix, which we still call A, we consider a perturbation }\\tilde A of A. We prove that correlations are encoded in \\Delta\\mathbf{x=\\mid x -}\\mathbf{x}\\mid , where\\tilde{\\mathbf{x}} is the least quares solution of\\tilde A\\tilde{\\mathbf{x}}=\\mathbf{b. We cluster features first based on \\Delta\\mathbf{x} and then using the entropy of features. Finally, a feature is selected from each sub-cluster based on its weight and entropy. } The effectiveness of DRPT has been verified by performing a series of comparisons with seven state-of-the-art feature selection methods over ten genetic datasets ranging up from 9,117 to 267,604 features. 
The results show that , over all, the performance of DRPT is favorable in several aspects compared to each feature selection algorithm.\\e", "edit_actions": [{"type": "R", "before": "In the presence of large dimensional datasets that contain many irrelevant features(variables), dimensionality reduction algorithms have proven to be useful in removing features with low variance and combine features with high correlation", "after": "A central problem in machine learning and pattern recognition is the process of recognizing the most important features", "start_char_pos": 0, "end_char_pos": 238}, {"type": "R", "before": "propose", "after": "provide", "start_char_pos": 259, "end_char_pos": 266}, {"type": "R", "before": "which uses singular value decomposition of a matrix and", "after": "(DRPT) that consists of first removing the irrelevant features and then detecting correlations between the remaining features. Let D=", "start_char_pos": 298, "end_char_pos": 353}, {"type": "A", "before": null, "after": "A\\mid \\mathbf{b", "start_char_pos": 354, "end_char_pos": 354}, {"type": "A", "before": null, "after": "be a dataset, where \\mathbf{b", "start_char_pos": 357, "end_char_pos": 357}, {"type": "R", "before": "method of least squares to remove the irrelevant features and detect correlations between the remaining features .", "after": "corresponding column (feature). We define a threshold based on the local maxima of \\mathbf{x", "start_char_pos": 362, "end_char_pos": 476}, {"type": "A", "before": null, "after": "A of A. 
We prove that correlations are encoded in \\Delta\\mathbf{x", "start_char_pos": 657, "end_char_pos": 657}, {"type": "A", "before": null, "after": "\\mid , where", "start_char_pos": 677, "end_char_pos": 677}, {"type": "A", "before": null, "after": "is the least quares solution of", "start_char_pos": 696, "end_char_pos": 696}, {"type": "A", "before": null, "after": "A", "start_char_pos": 703, "end_char_pos": 703}, {"type": "A", "before": null, "after": "=\\mathbf{b", "start_char_pos": 721, "end_char_pos": 721}, {"type": "R", "before": "our method", "after": "DRPT", "start_char_pos": 924, "end_char_pos": 934}, {"type": "A", "before": null, "after": "seven", "start_char_pos": 996, "end_char_pos": 996}, {"type": "R", "before": "our method", "after": ", over all, the performance of DRPT", "start_char_pos": 1131, "end_char_pos": 1141}, {"type": "R", "before": "various", "after": "several", "start_char_pos": 1158, "end_char_pos": 1165}, {"type": "R", "before": "state-of-the-art feature selection methods.", "after": "each feature selection algorithm.", "start_char_pos": 1186, "end_char_pos": 1229}], "sents_char_pos": [0, 240, 548, 722, 814, 900, 1108]} {"doc_id": "2003.00381", "revision_depth": "1", "before_revision": "Cluster algorithms are gaining in popularity due to their compelling ability to identify discrete subgroups in data, and their increasing accessibility in mainstream programming languages and statistical software. While researchers can follow guidelines to choose the right algorithms, and to determine what constitutes convincing clustering , there are no firmly established ways of computing a priori statistical power for cluster analysis. Here, we take a simulation approach to estimate power and classification accuracy for popular analysis pipelines . We systematically varied cluster size, number of clusters, number of different features between clusters, effect sizewithin each different feature, and cluster covariance structurein generated datasets . 
We then subjected these datasets to common dimensionality reduction approaches (none, multi-dimensional scaling, or uniform manifold approximation and projection ) and cluster algorithms (k-means, hierarchical agglomerative clustering with Ward linkage and Euclidean distance, or average linkage and cosine distance, HDBSCAN). Furthermore, we simulated additional datasets to explore the effect of sample size and cluster separation on statistical power and classification accuracy . We found that clustering outcomes were driven by large effect sizes or the accumulation of many smaller effects across features, and were mostly unaffected by differences in covariance structure. Sufficient statistical power can be achieved with relatively small samples (N=20 per subgroup), provided cluster separation is large ({\\Delta}=4). Finally, we discuss whether fuzzy clustering (c-means) could provide a more parsimonious alternative for identifying separable multivariate normal distributions, particularly those with lower centroid separation } .", "after_revision": "Cluster algorithms are increasingly popular in biomedical research due to their compelling ability to identify discrete subgroups in data, and their increasing accessibility in mainstream software. While guidelines exist for algorithm selection and outcome evaluation , there are no firmly established ways of computing a priori statistical power for cluster analysis. Here, we estimated power and accuracy for common analysis pipelines through simulation. We varied subgroup size, number , separation (effect size), and covariance structure . We then subjected generated datasets to dimensionality reduction (none, multidimensional scaling, or UMAP ) and cluster algorithms (k-means, agglomerative hierarchical clustering with Ward or average linkage and Euclidean or cosine distance, HDBSCAN). 
Finally, we compared the statistical power of discrete (k-means), \"fuzzy\" (c-means), and finite mixture modelling approaches (which include latent profile and latent class analysis) . We found that outcomes were driven by large effect sizes or the accumulation of many smaller effects across features, and were unaffected by differences in covariance structure. Sufficient statistical power was achieved with relatively small samples (N=20 per subgroup), provided cluster separation is large ({\\Delta}=4). Fuzzy clustering provided a more parsimonious and powerful alternative for identifying separable multivariate normal distributions, particularly those with slightly lower centroid separation ( \\Delta}=3). Overall, we recommend that researchers 1) only apply cluster analysis when large subgroup separation is expected, 2) aim for sample sizes of N=20 to N=30 per expected subgroup, 3) use multidimensional scaling to improve cluster separation, and 4) use fuzzy clustering or finite mixture modelling approaches that are more powerful and more parsimonious with partially overlapping multivariate normal distributions .", "edit_actions": [{"type": "R", "before": "gaining in popularity", "after": "increasingly popular in biomedical research", "start_char_pos": 23, "end_char_pos": 44}, {"type": "D", "before": "programming languages and statistical", "after": null, "start_char_pos": 166, "end_char_pos": 203}, {"type": "R", "before": "researchers can follow guidelines to choose the right algorithms, and to determine what constitutes convincing clustering", "after": "guidelines exist for algorithm selection and outcome evaluation", "start_char_pos": 220, "end_char_pos": 341}, {"type": "R", "before": "take a simulation approach to estimate power and classification accuracy for popular analysis pipelines . We systematically varied cluster", "after": "estimated power and accuracy for common analysis pipelines through simulation. 
We varied subgroup", "start_char_pos": 452, "end_char_pos": 590}, {"type": "R", "before": "of clusters, number of different features between clusters, effect sizewithin each different feature, and cluster covariance structurein generated datasets", "after": ", separation (effect size), and covariance structure", "start_char_pos": 604, "end_char_pos": 759}, {"type": "R", "before": "these datasets to common dimensionality reduction approaches", "after": "generated datasets to dimensionality reduction", "start_char_pos": 780, "end_char_pos": 840}, {"type": "R", "before": "multi-dimensional", "after": "multidimensional", "start_char_pos": 848, "end_char_pos": 865}, {"type": "R", "before": "uniform manifold approximation and projection", "after": "UMAP", "start_char_pos": 878, "end_char_pos": 923}, {"type": "R", "before": "hierarchical agglomerative", "after": "agglomerative hierarchical", "start_char_pos": 959, "end_char_pos": 985}, {"type": "D", "before": "linkage and Euclidean distance,", "after": null, "start_char_pos": 1007, "end_char_pos": 1038}, {"type": "A", "before": null, "after": "Euclidean or", "start_char_pos": 1062, "end_char_pos": 1062}, {"type": "R", "before": "Furthermore, we simulated additional datasets to explore the effect of sample size and cluster separation on statistical power and classification accuracy", "after": "Finally, we compared the statistical power of discrete (k-means), \"fuzzy\" (c-means), and finite mixture modelling approaches (which include latent profile and latent class analysis)", "start_char_pos": 1090, "end_char_pos": 1244}, {"type": "D", "before": "clustering", "after": null, "start_char_pos": 1261, "end_char_pos": 1271}, {"type": "D", "before": "mostly", "after": null, "start_char_pos": 1385, "end_char_pos": 1391}, {"type": "R", "before": "can be", "after": "was", "start_char_pos": 1472, "end_char_pos": 1478}, {"type": "R", "before": "Finally, we discuss whether fuzzy clustering (c-means) could provide", "after": "Fuzzy 
clustering provided", "start_char_pos": 1590, "end_char_pos": 1658}, {"type": "A", "before": null, "after": "and powerful", "start_char_pos": 1679, "end_char_pos": 1679}, {"type": "A", "before": null, "after": "slightly", "start_char_pos": 1777, "end_char_pos": 1777}, {"type": "A", "before": null, "after": "(", "start_char_pos": 1804, "end_char_pos": 1804}, {"type": "A", "before": null, "after": "\\Delta", "start_char_pos": 1805, "end_char_pos": 1805}, {"type": "A", "before": null, "after": "=3). Overall, we recommend that researchers 1) only apply cluster analysis when large subgroup separation is expected, 2) aim for sample sizes of N=20 to N=30 per expected subgroup, 3) use multidimensional scaling to improve cluster separation, and 4) use fuzzy clustering or finite mixture modelling approaches that are more powerful and more parsimonious with partially overlapping multivariate normal distributions", "start_char_pos": 1806, "end_char_pos": 1806}], "sents_char_pos": [0, 213, 442, 557, 761, 1089, 1246, 1442, 1589]} {"doc_id": "2003.02200", "revision_depth": "1", "before_revision": "In geographic information systems, Digital Elevation Models (DEMs) are commonly processed using radial scanning based algorithms. These algorithms are particularly popular when calculating parameters whose magnitudes decrease with the distance squared such as those related to radio signals, sound waves , and human eyesight. However, radial scanning algorithms imply a large number of accesses to 2D arrays , which despite being regular, results in poor data locality . This paper proposes a new methodology , termed sDEM , which substantially improves the locality of memory accesses and largely increases the inherent parallelism involved in the computation of radial scanning algorithms. In particular, sDEM applies a data restructuring technique prior to accessing the memory and performing the computation. 
In order to demonstrate the high efficiency of sDEM, we use the problem of total viewshed computation as a case study . Sequential, parallel , single-GPU and multi-GPU implementations are analyzed and compared with the state-of-the-art total viewshed computation algorithm. Experiments show that sDEM achieves an acceleration rate of up to 827.3 times for the best multi-GPU execution approach with respect to the best multi-core implementation .", "after_revision": " Digital Elevation Models (DEMs) are important datasets for modelling the line of sight, such as radio signals, sound waves and human vision. These are commonly analyzed using rotational sweep algorithms. However, such algorithms require large numbers of memory accesses to 2D arrays which, despite being regular, result in poor data locality in memory. Here, we propose a new methodology called skewed Digital Elevation Model (sDEM) , which substantially improves the locality of memory accesses and increases the inherent parallelism involved in the computation of rotational sweep-based algorithms. In particular, sDEM applies a data restructuring technique before accessing the memory and performing the computation. To demonstrate the high efficiency of sDEM, we use the problem of total viewshed computation as a case study considering different implementations for single-core, multi-core , single-GPU and multi-GPU platforms. We conducted two experiments to compare sDEM with (i) the most commonly used geographic information systems (GIS) software and (ii) the state-of-the-art algorithm. In the first experiment, sDEM is on average 8.8x faster than current GIS software despite being able to consider only few points because of their limitations. 
In the second experiment, sDEM is 827.3 x faster than the state-of-the-art algorithm in the best case .", "edit_actions": [{"type": "D", "before": "In geographic information systems,", "after": null, "start_char_pos": 0, "end_char_pos": 34}, {"type": "R", "before": "commonly processed using radial scanning based algorithms. These algorithms are particularly popular when calculating parameters whose magnitudes decrease with the distance squared such as those related to", "after": "important datasets for modelling the line of sight, such as", "start_char_pos": 71, "end_char_pos": 276}, {"type": "R", "before": ", and human eyesight. However, radial scanning algorithms imply a large number of", "after": "and human vision. These are commonly analyzed using rotational sweep algorithms. However, such algorithms require large numbers of memory", "start_char_pos": 304, "end_char_pos": 385}, {"type": "R", "before": ", which", "after": "which,", "start_char_pos": 408, "end_char_pos": 415}, {"type": "R", "before": "results", "after": "result", "start_char_pos": 439, "end_char_pos": 446}, {"type": "R", "before": ". This paper proposes", "after": "in memory. Here, we propose", "start_char_pos": 469, "end_char_pos": 490}, {"type": "R", "before": ", termed sDEM", "after": "called skewed Digital Elevation Model (sDEM)", "start_char_pos": 509, "end_char_pos": 522}, {"type": "D", "before": "largely", "after": null, "start_char_pos": 590, "end_char_pos": 597}, {"type": "R", "before": "radial scanning", "after": "rotational sweep-based", "start_char_pos": 664, "end_char_pos": 679}, {"type": "R", "before": "prior to", "after": "before", "start_char_pos": 751, "end_char_pos": 759}, {"type": "R", "before": "In order to", "after": "To", "start_char_pos": 813, "end_char_pos": 824}, {"type": "R", "before": ". 
Sequential, parallel", "after": "considering different implementations for single-core, multi-core", "start_char_pos": 931, "end_char_pos": 953}, {"type": "R", "before": "implementations are analyzed and compared with the", "after": "platforms. We conducted two experiments to compare sDEM with (i) the most commonly used geographic information systems (GIS) software and (ii) the", "start_char_pos": 981, "end_char_pos": 1031}, {"type": "R", "before": "total viewshed computation algorithm. Experiments show that sDEM achieves an acceleration rate of up to", "after": "algorithm. In the first experiment, sDEM is on average 8.8x faster than current GIS software despite being able to consider only few points because of their limitations. In the second experiment, sDEM is", "start_char_pos": 1049, "end_char_pos": 1152}, {"type": "R", "before": "times for the best multi-GPU execution approach with respect to the best multi-core implementation", "after": "x faster than the state-of-the-art algorithm in the best case", "start_char_pos": 1159, "end_char_pos": 1257}], "sents_char_pos": [0, 129, 325, 470, 691, 812, 932, 1086]} {"doc_id": "2003.03025", "revision_depth": "2", "before_revision": "An adaptive guidance system that supports equipment operators requires a comprehensive model of task and user behavior that considers different skill and knowledge levels as well as diverse situations. In the present paper, we introduced a novel method for machine operation modeling aimed to integrate visual operation records provided by users with different skills, knowledge levels , and interpersonal behavior patterns . For this purpose, we investigated the relationships between user behavior patterns that could be visually observed and their skill levels under machine operation conditions. We considered sixty samples of two sewing tasks performed by five operators using a head-mounted RGB-D camera and a static gaze tracker. 
We examined behavioral features, such as the operator gaze, head movements, and hand interactionswith hotspots, and observed significant behavioral changes as a result of continuous skill improvement. We automatically modeled the variety of behaviors of operation tasks with a two-step approach, prototype selection and experiences integration . The experimental results indicated that features, such as duration of task execution and user head movements, could serve as appropriate indices for skill level evaluation, and provide useful information for integrating various records corresponding to different skill levels and behavioral characteristics. Integrating operation records with operating habits allowed developing a rich inclusive task model that could be used to flexibly adapt to various user-specific behavior patterns .", "after_revision": "An adaptive guidance system that supports equipment operators requires a comprehensive model , which involves a variety of user behaviors that considers different skill and knowledge levels , as well as rapid-changing task situations. In the present paper, we introduced a novel method for modeling operational tasks, aiming to integrate visual operation records provided by users with diverse experience levels and personal characteristics . For this purpose, we investigated the relationships between user behavior patterns that could be visually observed and their skill levels under machine operation conditions. We considered 144 samples of two sewing tasks performed by 12 operators using a head-mounted RGB-D camera and a static gaze tracker. Behavioral features, such as the operator 's gaze and head movements, hand interactions, and hotspots, were observed with significant behavioral trends resulting from continuous user skill improvement. We used a two-step method to model the diversity of user behavior: prototype selection and experience integration based on skill ranking . 
The experimental results showed that several features could serve as appropriate indices for user skill evaluation, as well as providing valuable clues for revealing personal behavioral characteristics. The integration of user records with different skills and operational habits allowed developing a rich , inclusive task model that could be used flexibly to adapt to diverse user-specific needs .", "edit_actions": [{"type": "R", "before": "of task and user behavior", "after": ", which involves a variety of user behaviors", "start_char_pos": 93, "end_char_pos": 118}, {"type": "A", "before": null, "after": ",", "start_char_pos": 171, "end_char_pos": 171}, {"type": "R", "before": "diverse", "after": "rapid-changing task", "start_char_pos": 183, "end_char_pos": 190}, {"type": "R", "before": "machine operation modeling aimed", "after": "modeling operational tasks, aiming", "start_char_pos": 258, "end_char_pos": 290}, {"type": "R", "before": "different skills, knowledge levels , and interpersonal behavior patterns", "after": "diverse experience levels and personal characteristics", "start_char_pos": 352, "end_char_pos": 424}, {"type": "R", "before": "sixty", "after": "144", "start_char_pos": 615, "end_char_pos": 620}, {"type": "R", "before": "five", "after": "12", "start_char_pos": 662, "end_char_pos": 666}, {"type": "R", "before": "We examined behavioral", "after": "Behavioral", "start_char_pos": 738, "end_char_pos": 760}, {"type": "R", "before": "gaze,", "after": "'s gaze and", "start_char_pos": 792, "end_char_pos": 797}, {"type": "R", "before": "and hand interactionswith hotspots, and observed significant behavioral changes as a result of continuous", "after": "hand interactions, and hotspots, were observed with significant behavioral trends resulting from continuous user", "start_char_pos": 814, "end_char_pos": 919}, {"type": "R", "before": "automatically modeled the variety of behaviors of operation tasks with", "after": "used", "start_char_pos": 942, "end_char_pos": 
1012}, {"type": "R", "before": "approach,", "after": "method to model the diversity of user behavior:", "start_char_pos": 1024, "end_char_pos": 1033}, {"type": "R", "before": "experiences integration", "after": "experience integration based on skill ranking", "start_char_pos": 1058, "end_char_pos": 1081}, {"type": "R", "before": "indicated that features, such as duration of task execution and user head movements,", "after": "showed that several features", "start_char_pos": 1109, "end_char_pos": 1193}, {"type": "R", "before": "skill level evaluation, and provide useful information for integrating various records corresponding to different skill levels and", "after": "user skill evaluation, as well as providing valuable clues for revealing personal", "start_char_pos": 1233, "end_char_pos": 1363}, {"type": "R", "before": "Integrating operation records with operating", "after": "The integration of user records with different skills and operational", "start_char_pos": 1392, "end_char_pos": 1436}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1470, "end_char_pos": 1470}, {"type": "R", "before": "to flexibly adapt to various", "after": "flexibly to adapt to diverse", "start_char_pos": 1511, "end_char_pos": 1539}, {"type": "R", "before": "behavior patterns", "after": "needs", "start_char_pos": 1554, "end_char_pos": 1571}], "sents_char_pos": [0, 202, 426, 600, 737, 938, 1083, 1391]} {"doc_id": "2003.03595", "revision_depth": "1", "before_revision": "We show that computing the Tutte polynomial of a linear matroid of dimension k on k^{O(1)} points over a field of k^{O(1)} elements requires k^{\\Omega(k)} time unless the ETH---a counting extension of the Exponential Time Hypothesis of Impagliazzo and Paturi [CCC 1999] due to Dell {\\em et al. [ACM TALG 2014]---is false. This holds also for linear matroids that admit a representation where every point is associated to a vector with at most two nonzero coordinates. 
Moreover, we also show that the same is true for computing the Tutte polynomial of a binary matroid of dimension k on k^{O(1)} points with at most three nonzero coordinates in each point's vector. These two results stand in sharp contrast to computing the Tutte polynomial of a k-vertex graph (that is, the Tutte polynomial of a graphic {\\em matroid of dimension k---which is representable in dimension k over the binary field so that every vector has exactly two nonzero coordinates), which is known to be computable in 2^k k^{O(1)} time [Bj\\\"orklund {\\em et al. , FOCS 2008]. Our lower-bound proofs proceed in three steps: (i) a classic connection due to Crapo and Rota [1970] between the number of tuples of codewords of full support and the Tutte polynomial of the matroid associated with the code; (ii) an earlier-established \\#ETH-hardness of counting the solutions to a bipartite (d,2)-CSP on n vertices in d^{o(n)} time; and (iii) new embeddings of such CSP instances as questions about codewords of full support in a linear code. We complement these lower bounds with a matching upper-bound algorithm design that computes the Tutte polynomial of a linear matroid of dimension k on k^{O(1)} points in k^{O(k)} arithmetic operationsin the base field{\\em .", "after_revision": "We show that computing the Tutte polynomial of a linear matroid of dimension k on k^{O(1)} points over a field of k^{O(1)} elements requires k^{\\Omega(k)} time unless the \\# ETH---a counting extension of the Exponential Time Hypothesis of Impagliazzo and Paturi [CCC 1999] due to Dell {\\em et al. [ACM TALG 2014]---is false. This holds also for linear matroids that admit a representation where every point is associated to a vector with at most two nonzero coordinates. We also show that the same is true for computing the Tutte polynomial of a binary matroid of dimension k on k^{O(1)} points with at most three nonzero coordinates in each point's vector. 
This is in sharp contrast to computing the Tutte polynomial of a k-vertex graph (that is, the Tutte polynomial of a {\\em graphic matroid of dimension k---which is representable in dimension k over the binary field so that every vector has two nonzero coordinates), which is known to be computable in 2^k k^{O(1)} time [Bj\\\"orklund {\\em et al. , FOCS 2008]. Our lower-bound proofs proceed via (i) a connection due to Crapo and Rota [1970] between the number of tuples of codewords of full support and the Tutte polynomial of the matroid associated with the code; (ii) an earlier-established \\#ETH-hardness of counting the solutions to a bipartite (d,2)-CSP on n vertices in d^{o(n)} time; and (iii) new embeddings of such CSP instances as questions about codewords of full support in a linear code. We complement these lower bounds with two algorithm designs. The first design computes the Tutte polynomial of a linear matroid of dimension ~ k on k^{O(1)} points in k^{O(k)} operations. The second design generalizes the Bj\\\"orklund~{\\em et al. 
algorithm and runs in q^{k+1 .", "edit_actions": [{"type": "A", "before": null, "after": "\\#", "start_char_pos": 171, "end_char_pos": 171}, {"type": "R", "before": "Moreover, we", "after": "We", "start_char_pos": 469, "end_char_pos": 481}, {"type": "R", "before": "These two results stand", "after": "This is", "start_char_pos": 666, "end_char_pos": 689}, {"type": "D", "before": "graphic", "after": null, "start_char_pos": 798, "end_char_pos": 805}, {"type": "A", "before": null, "after": "graphic", "start_char_pos": 811, "end_char_pos": 811}, {"type": "D", "before": "exactly", "after": null, "start_char_pos": 922, "end_char_pos": 929}, {"type": "R", "before": "in three steps:", "after": "via", "start_char_pos": 1079, "end_char_pos": 1094}, {"type": "D", "before": "classic", "after": null, "start_char_pos": 1101, "end_char_pos": 1108}, {"type": "R", "before": "a matching upper-bound algorithm design that", "after": "two algorithm designs. The first design", "start_char_pos": 1547, "end_char_pos": 1591}, {"type": "A", "before": null, "after": "~", "start_char_pos": 1655, "end_char_pos": 1655}, {"type": "R", "before": "arithmetic operationsin the base field", "after": "operations. The second design generalizes the Bj\\\"orklund~", "start_char_pos": 1689, "end_char_pos": 1727}, {"type": "A", "before": null, "after": "et al.", "start_char_pos": 1732, "end_char_pos": 1732}, {"type": "A", "before": null, "after": "algorithm and runs in q^{k+1", "start_char_pos": 1733, "end_char_pos": 1733}], "sents_char_pos": [0, 294, 322, 468, 665, 1047, 1272, 1398, 1508]} {"doc_id": "2003.03876", "revision_depth": "1", "before_revision": "In this article we propose and calculate, under the Black Scholes option pricing model, a measure of the relative value of a delta- Symmetric Strangle . 
The proposed measure accounts for the price of the strangle, relative to the (present value of the ) spread between the two strikes, all expressed, after a natural re-parameterization, in terms of delta and a volatility parameter. We show the startling main result that under the standard BS option pricing model, this measure of relative value is a function of delta only and is independent of the time to expiry, the price of the underlying security or the prevailing volatility used in the pricing model. {\\it In fact, the simple and intuitively appealing expression for this measure allows us to study the strangle's exit strategy and the corresponding optimal choices for values {\\it of delta.", "after_revision": "Trading option strangles is a highly popular strategy often used by market participants to mitigate volatility risks in their portfolios. In this paper we propose a measure of the relative value of a delta-Symmetric Strangle and compute it under the standard Black-Scholes option pricing model. This new measure accounts for the price of the strangle, relative to the Present Value of the spread between the two strikes, all expressed, after a natural re-parameterization, in terms of delta and a volatility parameter. We show that under the standard BS option pricing model, this measure of relative value is bounded by a simple function of delta only and is independent of the time to expiry, the price of the underlying security or the prevailing volatility used in the pricing model. We demonstrate how this bound can be used as a quick{\\it benchmark to assess, regardless the market volatility, the duration of the contract or the price of the underlying security, the market (relative) value of the \\delta-strangle in comparison to its BS (relative) price. 
In fact, the explicit and simple expression for this measure and bound allows us to also study in detail the strangle's exit strategy and the corresponding {\\it optimal choice for a value of delta.", "edit_actions": [{"type": "R", "before": "In this article we propose and calculate, under the Black Scholes option pricing model,", "after": "Trading option strangles is a highly popular strategy often used by market participants to mitigate volatility risks in their portfolios. In this paper we propose", "start_char_pos": 0, "end_char_pos": 87}, {"type": "R", "before": "delta- Symmetric Strangle . The proposed", "after": "delta-Symmetric Strangle and compute it under the standard Black-Scholes option pricing model. This new", "start_char_pos": 125, "end_char_pos": 165}, {"type": "R", "before": "(present value of the )", "after": "Present Value of the", "start_char_pos": 230, "end_char_pos": 253}, {"type": "D", "before": "the startling main result", "after": null, "start_char_pos": 392, "end_char_pos": 417}, {"type": "R", "before": "a", "after": "bounded by a simple", "start_char_pos": 501, "end_char_pos": 502}, {"type": "A", "before": null, "after": "We demonstrate how this bound can be used as a quick", "start_char_pos": 661, "end_char_pos": 661}, {"type": "A", "before": null, "after": "benchmark", "start_char_pos": 666, "end_char_pos": 666}, {"type": "A", "before": null, "after": "to assess, regardless the market volatility, the duration of the contract or the price of the underlying security, the market (relative) value of the \\delta-strangle in comparison to its BS (relative) price.", "start_char_pos": 667, "end_char_pos": 667}, {"type": "R", "before": "simple and intuitively appealing", "after": "explicit and simple", "start_char_pos": 681, "end_char_pos": 713}, {"type": "A", "before": null, "after": "and bound", "start_char_pos": 742, "end_char_pos": 742}, {"type": "R", "before": "study", "after": "also study in detail", "start_char_pos": 756, "end_char_pos": 
761}, {"type": "D", "before": "optimal choices for values", "after": null, "start_char_pos": 813, "end_char_pos": 839}, {"type": "A", "before": null, "after": "optimal", "start_char_pos": 845, "end_char_pos": 845}, {"type": "A", "before": null, "after": "choice for a value", "start_char_pos": 846, "end_char_pos": 846}], "sents_char_pos": [0, 152, 383, 660]} {"doc_id": "2003.05582", "revision_depth": "1", "before_revision": "Twenty years ago, Bobkov, Houdr\\'e, and the last author introduced a Poincar\\'e-type functional graph parameter, \\lambda_\\infty (G) , of a graph G , and related it to the {\\em vertex expansion} of G via a Cheeger-type inequality . This is analogous to the Cheeger-type inequality relating the spectral gap , \\lambda_2 (G), of the graph to its {\\em edge expansion}. While \\lambda_2 can be computed efficiently, the computational complexity of \\lambda_\\infty has remained an open question. Following the work of the second author with Raghavendra and Vempala, wherein the complexity of \\lambda_\\infty was related to the so-called Small-Set Expansion (SSE) problem, it has been believed that computing \\lambda_\\infty is a hard problem. We settle this question by proving that computing \\lambda_\\infty is indeed NP-hard . Additionally, we use our techniques to prove NP-hardness of computing the spread constant (of a graph) \\emph{ , a geometric measure introduced by Alon, Boppana, and Spencer, in the context of deriving an asymptotic isoperimetric inequality on cartesian products of graphs. We complement our hardness results by providing approximation schemes for computing \\lambda_\\infty and the spread constant of star graphs, and investigate constant approximability for weighted trees. 
Finally{\\em , we provide improved approximation results for the maximum variance embedding problem for general graphs, by replacing the optimal orthogonal projection (PCA) with a randomized projection approach .", "after_revision": " Bobkov, Houdr\\'e, and the last author introduced a Poincar\\'e-type functional parameter, \\lambda_\\infty , of a graph G . They related \\lambda_\\infty to the {\\em vertex expansion} of the graph via a Cheeger-type inequality , analogous to the inequality relating the spectral gap of the graph , \\lambda_2 , to its {\\em edge expansion}. While \\lambda_2 can be computed efficiently, the computational complexity of \\lambda_\\infty has remained an open question. Following the work of the second author with Raghavendra and Vempala, wherein the complexity of \\lambda_\\infty was related to the so-called small-set expansion (SSE) problem, it has been believed that computing \\lambda_\\infty is a hard problem. We confirm this conjecture by proving that computing \\lambda_\\infty is indeed NP-hard , even for weighted trees. Our gadget further proves NP-hardness of computing \\emph{spread constant of a weighted tree; i.e. , a geometric measure of the graph, introduced by Alon, Boppana, and Spencer, in the context of deriving an asymptotic isoperimetric inequality of Cartesian products of graphs. We conclude this case by providing a fully polynomial time approximation scheme. We further study a generalization of spread constant in machine learning literature, namely the{\\em maximum variance embedding problem. For trees , we provide fast combinatorial algorithms that avoid solving a semidefinite relaxation of the problem. 
On the other hand, for general graphs, we propose a randomized projection method that can outperform the optimal orthogonal projection , i.e., PCA, classically used for rounding of the optimum lifted solution (to SDP relaxation) of the problem .", "edit_actions": [{"type": "D", "before": "Twenty years ago,", "after": null, "start_char_pos": 0, "end_char_pos": 17}, {"type": "D", "before": "graph", "after": null, "start_char_pos": 96, "end_char_pos": 101}, {"type": "D", "before": "(G)", "after": null, "start_char_pos": 128, "end_char_pos": 131}, {"type": "R", "before": ", and related it", "after": ". They related \\lambda_\\infty", "start_char_pos": 147, "end_char_pos": 163}, {"type": "R", "before": "G", "after": "the graph", "start_char_pos": 197, "end_char_pos": 198}, {"type": "R", "before": ". This is", "after": ",", "start_char_pos": 229, "end_char_pos": 238}, {"type": "D", "before": "Cheeger-type", "after": null, "start_char_pos": 256, "end_char_pos": 268}, {"type": "A", "before": null, "after": "of the graph", "start_char_pos": 306, "end_char_pos": 306}, {"type": "R", "before": "(G), of the graph", "after": ",", "start_char_pos": 319, "end_char_pos": 336}, {"type": "R", "before": "Small-Set Expansion", "after": "small-set expansion", "start_char_pos": 629, "end_char_pos": 648}, {"type": "R", "before": "settle this question", "after": "confirm this conjecture", "start_char_pos": 737, "end_char_pos": 757}, {"type": "R", "before": ". Additionally, we use our techniques to prove", "after": ", even for weighted trees. 
Our gadget further proves", "start_char_pos": 817, "end_char_pos": 863}, {"type": "D", "before": "the spread constant (of a graph)", "after": null, "start_char_pos": 889, "end_char_pos": 921}, {"type": "A", "before": null, "after": "spread constant", "start_char_pos": 928, "end_char_pos": 928}, {"type": "A", "before": null, "after": "of a weighted tree; i.e.", "start_char_pos": 929, "end_char_pos": 929}, {"type": "A", "before": null, "after": "of the graph,", "start_char_pos": 952, "end_char_pos": 952}, {"type": "R", "before": "on cartesian", "after": "of Cartesian", "start_char_pos": 1061, "end_char_pos": 1073}, {"type": "R", "before": "complement our hardness results by providing approximation schemes for computing \\lambda_\\infty and the spread constant of star graphs, and investigate constant approximability for weighted trees. Finally", "after": "conclude this case by providing a fully polynomial time approximation scheme. We further study a generalization of spread constant in machine learning literature, namely the", "start_char_pos": 1097, "end_char_pos": 1301}, {"type": "A", "before": null, "after": "maximum variance embedding", "start_char_pos": 1306, "end_char_pos": 1306}, {"type": "A", "before": null, "after": "problem. For trees", "start_char_pos": 1307, "end_char_pos": 1307}, {"type": "R", "before": "improved approximation results for the maximum variance embedding problem", "after": "fast combinatorial algorithms that avoid solving a semidefinite relaxation of the problem. 
On the other hand,", "start_char_pos": 1321, "end_char_pos": 1394}, {"type": "R", "before": "by replacing", "after": "we propose a randomized projection method that can outperform", "start_char_pos": 1415, "end_char_pos": 1427}, {"type": "R", "before": "(PCA) with a randomized projection approach", "after": ", i.e., PCA, classically used for rounding of the optimum lifted solution (to SDP relaxation) of the problem", "start_char_pos": 1462, "end_char_pos": 1505}], "sents_char_pos": [0, 230, 365, 488, 733, 818, 1093, 1293]} {"doc_id": "2003.07434", "revision_depth": "1", "before_revision": "Coronaviruses are a famous family of viruses that causes illness in human or animals. The new type of corona virus COVID-19 disease was firstly discovered in Wuhan-China . However, recently, the virus has been widely spread in most of the world countries and is reported as a pandemic . Further, nowadays, all the world countries are striving to control the coronavirus disease COVID-19. There are many mechanisms to detect the coronavirus disease COVID-19 including clinical analysis of chest CT scan images and blood test results. The confirmed COVID-19 patient manifests as fever, tiredness, and dry cough. Particularly, several techniques can be used to detect the initial results of the virus such as medical detection Kits. However, such devices are incurring huge cost and it takes time to install them and use. Therefore, in this paper, a new framework is proposed to detect coronavirus disease COVID-19 using onboard smartphone sensors. The proposal provides a low-cost solution, since most of the radiologists have already held smartphones for different daily-purposes. People can use the framework on their smartphones for the virus detection purpose. 
Nowadays , smartphones are powerful with existing computation-rich processors, memory space, and large number of sensors including cameras, microphone, temperature sensor, inertial sensors, proximity, colour-sensor, humidity-sensor, and wireless chipsets/sensors. The designed Artificial Intelligence (AI) enabled framework reads the smartphone sensors signal measurements to predict the grade of severity of the pneumonia as well as predicting the result of the disease.", "after_revision": "Coronaviruses are a famous family of viruses that cause illness in both humans and animals. The new type of coronavirus COVID-19 was firstly discovered in Wuhan, China . However, recently, the virus has widely spread in most of the world and causing a pandemic according to the World Health Organization (WHO) . Further, nowadays, all the world countries are striving to control the COVID-19. There are many mechanisms to detect coronavirus including clinical analysis of chest CT scan images and blood test results. The confirmed COVID-19 patient manifests as fever, tiredness, and dry cough. Particularly, several techniques can be used to detect the initial results of the virus such as medical detection Kits. However, such devices are incurring huge cost , taking time to install them and use. Therefore, in this paper, a new framework is proposed to detect COVID-19 using built-in smartphone sensors. The proposal provides a low-cost solution, since most of radiologists have already held smartphones for different daily-purposes. Not only that but also ordinary people can use the framework on their smartphones for the virus detection purposes. Nowadays Smartphones are powerful with existing computation-rich processors, memory space, and large number of sensors including cameras, microphone, temperature sensor, inertial sensors, proximity, colour-sensor, humidity-sensor, and wireless chipsets/sensors. 
The designed Artificial Intelligence (AI) enabled framework reads the smartphone sensors signal measurements to predict the grade of severity of the pneumonia as well as predicting the result of the disease.", "edit_actions": [{"type": "R", "before": "causes illness in human or", "after": "cause illness in both humans and", "start_char_pos": 50, "end_char_pos": 76}, {"type": "R", "before": "corona virus", "after": "coronavirus", "start_char_pos": 102, "end_char_pos": 114}, {"type": "D", "before": "disease", "after": null, "start_char_pos": 124, "end_char_pos": 131}, {"type": "R", "before": "Wuhan-China", "after": "Wuhan, China", "start_char_pos": 158, "end_char_pos": 169}, {"type": "D", "before": "been", "after": null, "start_char_pos": 205, "end_char_pos": 209}, {"type": "R", "before": "countries and is reported as a pandemic", "after": "and causing a pandemic according to the World Health Organization (WHO)", "start_char_pos": 245, "end_char_pos": 284}, {"type": "D", "before": "coronavirus disease", "after": null, "start_char_pos": 358, "end_char_pos": 377}, {"type": "R", "before": "the coronavirus disease COVID-19", "after": "coronavirus", "start_char_pos": 424, "end_char_pos": 456}, {"type": "R", "before": "and it takes", "after": ", taking", "start_char_pos": 776, "end_char_pos": 788}, {"type": "D", "before": "coronavirus disease", "after": null, "start_char_pos": 883, "end_char_pos": 902}, {"type": "R", "before": "onboard", "after": "built-in", "start_char_pos": 918, "end_char_pos": 925}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1003, "end_char_pos": 1006}, {"type": "R", "before": "People", "after": "Not only that but also ordinary people", "start_char_pos": 1080, "end_char_pos": 1086}, {"type": "R", "before": "purpose. Nowadays , smartphones", "after": "purposes. 
Nowadays Smartphones", "start_char_pos": 1154, "end_char_pos": 1185}], "sents_char_pos": [0, 85, 171, 286, 387, 532, 609, 729, 818, 945, 1079, 1162, 1426]} {"doc_id": "2003.08868", "revision_depth": "2", "before_revision": "Here we focus on the data analysis of the growth of epidemic spreading of Covid-19 in countries where different policies of containment have been activated. It is known that the growth of the pandemic spreading at its threshold is exponential but it is not known how to quantify the success of different containment policies. We identify that a successful approach gives an arrested phase regime following the Ostwald growth, where in the course of time one phase transforms into another metastable phase with a similar free energy as observed in oxygen interstitials diffusion in quantum complex matter and in crystallization of proteins. We introduce the s factor which provides a quantitative measure of the efficiency and speed of the adopted containment policy, which is very helpful not only to monitor the Covid-19 pandemic spreading but also for other countries to choose the best containment policy. The results show that the policy based in joint confinement, targeted tests and tracking positive cases is the most rapid pandemic containment policy in fact we have found for China, South Korea, and Italy the values of the success s factor 9, 5, 32 respectively where the lowest s value indicates the best containment policy .", "after_revision": "Here , we focus on the data analysis of the growth of epidemic spread of Covid-19 in countries where different policies of containment were activated. It is known that the growth of pandemic spread at its threshold is exponential , but it is not known how to quantify the success of different containment policies. 
We identify that a successful approach gives an arrested phase regime following the Ostwald growth, where , over the course of time , one phase transforms into another metastable phase with a similar free energy as observed in oxygen interstitial diffusion in quantum complex matter and in crystallization of proteins. We introduce the s factor which provides a quantitative measure of the efficiency and speed of the adopted containment policy, which is very helpful not only to monitor the Covid-19 pandemic spread but also for other countries to choose the best containment policy. The results show that a policy based on joint confinement, targeted tests , and tracking positive cases is the most rapid pandemic containment policy ; in fact, we found values of 9, 5, and 31 for the success s factor for China, South Korea, and Italy, respectively, where the lowest s factor indicates the best containment policy ", "edit_actions": [{"type": "A", "before": null, "after": ",", "start_char_pos": 5, "end_char_pos": 5}, {"type": "R", "before": "spreading", "after": "spread", "start_char_pos": 62, "end_char_pos": 71}, {"type": "R", "before": "have been", "after": "were", "start_char_pos": 137, "end_char_pos": 146}, {"type": "R", "before": "the pandemic spreading", "after": "pandemic spread", "start_char_pos": 189, "end_char_pos": 211}, {"type": "A", "before": null, "after": ",", "start_char_pos": 244, "end_char_pos": 244}, {"type": "R", "before": "in", "after": ", over", "start_char_pos": 434, "end_char_pos": 436}, {"type": "A", "before": null, "after": ",", "start_char_pos": 456, "end_char_pos": 456}, {"type": "R", "before": "interstitials", "after": "interstitial", "start_char_pos": 557, "end_char_pos": 570}, {"type": "R", "before": "spreading", "after": "spread", "start_char_pos": 834, "end_char_pos": 843}, {"type": "R", "before": "the policy based in", "after": "a policy based on", "start_char_pos": 934, "end_char_pos": 953}, {"type": "A", "before": null, "after": ",", 
"start_char_pos": 988, "end_char_pos": 988}, {"type": "R", "before": "in fact we have found for China, South Korea, and Italy the values of the", "after": "; in fact, we found values of 9, 5, and 31 for the", "start_char_pos": 1063, "end_char_pos": 1136}, {"type": "R", "before": "9, 5, 32 respectively", "after": "for China, South Korea, and Italy, respectively,", "start_char_pos": 1154, "end_char_pos": 1175}, {"type": "R", "before": "value", "after": "factor", "start_char_pos": 1195, "end_char_pos": 1200}, {"type": "D", "before": ".", "after": null, "start_char_pos": 1239, "end_char_pos": 1240}], "sents_char_pos": [0, 157, 327, 642, 911]} {"doc_id": "2003.10086", "revision_depth": "1", "before_revision": "We analyze the spread of COVID-19 by considering the transmission of the disease among individuals both within and between communities . A set of communities can be defined as any partition of a population such that travel/social contact within each community far exceeds that between them (e. g. the U.S. could be partitioned by state or commuting zone boundaries). COVID-19 can be eliminated if the community-to-community reproductive number---i.e. the expected/ average number of other communities to which a single infected community will transmit the virus---is reduced to less than one. We find that this community-to-community reproductive number is proportional to the travel rate between communities and exponential in the length of the time-delay before community-level action is taken . Thus, reductions in travel and the speed at which communities take action can play decisive roles in stopping the outbreak. The analysis suggests that for the coronavirus to be eliminated, it is not necessary to impose aggressive social distancing measures all over the world at once, but rather only in communities in which active spreading is detected. The sooner such measures are imposed, the shorter the duration they must remain in place. 
Ifinfected communities (including those that become re-infected in the future) are quick enough to act , the number of actively infected communities ( and thus the number of communities in which such measures are required ) will exponentially decrease over time .", "after_revision": "We analyze the spread of COVID-19 by considering the transmission of the disease among individuals both within and between regions . A set of regions can be defined as any partition of a population such that travel/social contact within each region far exceeds that between them . COVID-19 can be eliminated if the region-to-region reproductive number---i.e. the average number of other regions to which a single infected region will transmit the virus---is reduced to less than one. We find that this region-to-region reproductive number is proportional to the travel rate between regions and exponential in the length of the time-delay before region-level control measures are imposed . Thus, reductions in travel and the speed with which regions take action play decisive roles in whether COVID-19 is eliminated from a collection of regions. If, on average, infected regions (including those that become re-infected in the future) impose social distancing measures shortly after active spreading begins within them , the number of infected regions, and thus the number of regions in which such measures are required , will exponentially decrease over time . Elimination will in this case be a stable fixed point even after the social distancing measures have been lifted from most of the regions .", "edit_actions": [{"type": "R", "before": "communities", "after": "regions", "start_char_pos": 123, "end_char_pos": 134}, {"type": "R", "before": "communities", "after": "regions", "start_char_pos": 146, "end_char_pos": 157}, {"type": "R", "before": "community", "after": "region", "start_char_pos": 250, "end_char_pos": 259}, {"type": "R", "before": "(e. g. the U.S. 
could be partitioned by state or commuting zone boundaries).", "after": ".", "start_char_pos": 290, "end_char_pos": 366}, {"type": "R", "before": "community-to-community", "after": "region-to-region", "start_char_pos": 401, "end_char_pos": 423}, {"type": "D", "before": "expected/", "after": null, "start_char_pos": 455, "end_char_pos": 464}, {"type": "R", "before": "communities", "after": "regions", "start_char_pos": 489, "end_char_pos": 500}, {"type": "R", "before": "community", "after": "region", "start_char_pos": 528, "end_char_pos": 537}, {"type": "R", "before": "community-to-community", "after": "region-to-region", "start_char_pos": 611, "end_char_pos": 633}, {"type": "R", "before": "communities", "after": "regions", "start_char_pos": 697, "end_char_pos": 708}, {"type": "R", "before": "community-level action is taken", "after": "region-level control measures are imposed", "start_char_pos": 764, "end_char_pos": 795}, {"type": "R", "before": "at which communities take action can", "after": "with which regions take action", "start_char_pos": 839, "end_char_pos": 875}, {"type": "R", "before": "stopping the outbreak. The analysis suggests that for the coronavirus to be eliminated, it is not necessary to impose aggressive social distancing measures all over the world at once, but rather only in communities in which active spreading is detected. The sooner such measures are imposed, the shorter the duration they must remain in place. Ifinfected communities", "after": "whether COVID-19 is eliminated from a collection of regions. 
If, on average, infected regions", "start_char_pos": 899, "end_char_pos": 1265}, {"type": "R", "before": "are quick enough to act", "after": "impose social distancing measures shortly after active spreading begins within them", "start_char_pos": 1322, "end_char_pos": 1345}, {"type": "R", "before": "actively infected communities (", "after": "infected regions,", "start_char_pos": 1362, "end_char_pos": 1393}, {"type": "R", "before": "communities", "after": "regions", "start_char_pos": 1417, "end_char_pos": 1428}, {"type": "R", "before": ")", "after": ",", "start_char_pos": 1465, "end_char_pos": 1466}, {"type": "A", "before": null, "after": ". Elimination will in this case be a stable fixed point even after the social distancing measures have been lifted from most of the regions", "start_char_pos": 1505, "end_char_pos": 1505}], "sents_char_pos": [0, 136, 366, 592, 797, 921, 1152, 1242]} {"doc_id": "2003.10525", "revision_depth": "1", "before_revision": "Policy evaluation studies, which aim to assess the effect of an intervention, imply some statistical challenges: real-world scenarios provide treatments which have not been assigned randomly and the analysis might be further complicated by the presence of interference between units. Researchers have started to develop novel methods that allow to manage spillover mechanisms in observational studies , under binary treatments. But many policy evaluation studies require complex treatments, such as multi-valued treatments . For instance, in political sciences , evaluating the impact of policies implemented by administrative entities often implies a multi-valued approach, as the general political stance towards a specific issue varies over many dimensions. In this work, we extend the statistical framework about causal inference under network interference in observational studies, allowing for a multi-valued individual treatment and an interference structure shaped by a weighted network. 
Under multi-valued treatment, each unit is exposed to all levels of the treatment, due to the influence of his neighbors, according to the network weights. The estimation strategy is based on a joint multiple generalized propensity score and allows to estimate direct effects, controlling for both individual and network covariates. We follow the proposed methodology to analyze the impact of national immigration policy on crime rates . We define a multi-valued characterization of political attitudes towards migrants and we assume that the extent to which each country can be influenced by another is modeled by an appropriate indicator, that we call Interference Compound Index (ICI) . Results suggest that implementing highly restrictive immigration policies leads to an increase of crime rates and the magnitude of estimated effects is stronger if we take into account multi-valued interference .", "after_revision": "Policy evaluation studies, which intend to assess the effect of an intervention, face some statistical challenges: in real-world settings treatments are not randomly assigned and the analysis might be further complicated by the presence of interference between units. Researchers have started to develop novel methods that allow to manage spillover mechanisms in observational studies ; recent works focus primarily on binary treatments. However, many policy evaluation studies deal with more complex interventions . For instance, in political science , evaluating the impact of policies implemented by administrative entities often implies a multivariate approach, as a policy towards a specific issue operates at many different levels and can be defined along a number of dimensions. In this work, we extend the statistical framework about causal inference under network interference in observational studies, allowing for a multi-valued individual treatment and an interference structure shaped by a weighted network. 
The estimation strategy is based on a joint multiple generalized propensity score and allows one to estimate direct effects, controlling for both individual and network covariates. We follow the proposed methodology to analyze the impact of the national immigration policy on the crime rate . We define a multi-valued characterization of political attitudes towards migrants and we assume that the extent to which each country can be influenced by another country is modeled by an appropriate indicator, summarizing their cultural and geographical proximity . Results suggest that implementing a highly restrictive immigration policy leads to an increase of the crime rate and the estimated effects is larger if we take into account interference from other countries .", "edit_actions": [{"type": "R", "before": "aim", "after": "intend", "start_char_pos": 33, "end_char_pos": 36}, {"type": "R", "before": "imply", "after": "face", "start_char_pos": 78, "end_char_pos": 83}, {"type": "A", "before": null, "after": "in", "start_char_pos": 113, "end_char_pos": 113}, {"type": "R", "before": "scenarios provide treatments which have not been assigned randomly", "after": "settings treatments are not randomly assigned", "start_char_pos": 125, "end_char_pos": 191}, {"type": "R", "before": ", under", "after": "; recent works focus primarily on", "start_char_pos": 402, "end_char_pos": 409}, {"type": "R", "before": "But", "after": "However,", "start_char_pos": 429, "end_char_pos": 432}, {"type": "R", "before": "require complex treatments, such as multi-valued treatments", "after": "deal with more complex interventions", "start_char_pos": 464, "end_char_pos": 523}, {"type": "R", "before": "sciences", "after": "science", "start_char_pos": 553, "end_char_pos": 561}, {"type": "R", "before": "multi-valued", "after": "multivariate", "start_char_pos": 653, "end_char_pos": 665}, {"type": "R", "before": "the general political stance", "after": "a policy", "start_char_pos": 679, "end_char_pos": 707}, 
{"type": "R", "before": "varies over many", "after": "operates at many different levels and can be defined along a number of", "start_char_pos": 733, "end_char_pos": 749}, {"type": "D", "before": "Under multi-valued treatment, each unit is exposed to all levels of the treatment, due to the influence of his neighbors, according to the network weights.", "after": null, "start_char_pos": 997, "end_char_pos": 1152}, {"type": "A", "before": null, "after": "one", "start_char_pos": 1246, "end_char_pos": 1246}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1391, "end_char_pos": 1391}, {"type": "R", "before": "crime rates", "after": "the crime rate", "start_char_pos": 1423, "end_char_pos": 1434}, {"type": "A", "before": null, "after": "country", "start_char_pos": 1600, "end_char_pos": 1600}, {"type": "R", "before": "that we call Interference Compound Index (ICI)", "after": "summarizing their cultural and geographical proximity", "start_char_pos": 1641, "end_char_pos": 1687}, {"type": "A", "before": null, "after": "a", "start_char_pos": 1724, "end_char_pos": 1724}, {"type": "R", "before": "policies", "after": "policy", "start_char_pos": 1756, "end_char_pos": 1764}, {"type": "R", "before": "crime rates and the magnitude of", "after": "the crime rate and the", "start_char_pos": 1789, "end_char_pos": 1821}, {"type": "R", "before": "stronger", "after": "larger", "start_char_pos": 1843, "end_char_pos": 1851}, {"type": "R", "before": "multi-valued interference", "after": "interference from other countries", "start_char_pos": 1876, "end_char_pos": 1901}], "sents_char_pos": [0, 284, 428, 525, 761, 996, 1152, 1330, 1436]} {"doc_id": "2003.11021", "revision_depth": "2", "before_revision": "The global spread of 2019-nCoV, a new virus belonging to the coronavirus family, forced national and local governments to apply different sets of measures aimed at containing its outbreak. 
Los Angeles has been one of the first cities in the United States to declare the state of emergency on March 4th, progressively issuing stronger policies involving--among the others--social distancing, the prohibition of crowded private and public gatherings and closure of leisure premises. These interventions highly disrupt and modify daily activities and habits, urban mobility and micro-level interactions between citizens. One of the many social phenomena that could be influenced by such measures is crime. Exploiting public data on crime in Los Angeles, and relying on routine activity and pattern theories of crime, this work investigates whether and how new coronavirus containment policies have an impact on crime trends in a metropolis using Bayesian structural time-series (BSTS) models. The article specifically focuses on nine urban crime categories , daily monitored from January 1st 2017 to March 28th 2020. The analyses have been updated bi-weekly (up to March 16 and up to March 28 2020 ) to dynamically assess the short-term effects of mild and hard interventions to shed light on how crime adapts to such structural modification of the environment. The results show that overall crime in Las Angeles is significantly decreasing , as well as robbery, shoplifting, theft and battery. No significant effect has been found for stolen vehicle , burglary, assault with deadly weapon, intimate partner violence and homicide. In the last section of this article, policy implications are also discussed.", "after_revision": "This work investigates whether and how COVID-19 containment policies had an immediate impact on crime trends in Los Angeles. The analysis is conducted using Bayesian structural time-series and focuses on nine crime categories and on the overall crime count , daily monitored from January 1st 2017 to March 28th 2020. 
We concentrate on two post-intervention time windows - from March 4th to March 16th and from March 4th to March 28th 2020 - to dynamically assess the short-term effects of mild and strict policies. In Los Angeles, overall crime has significantly decreased , as well as robbery, shoplifting, theft , and battery. No significant effect has been detected for vehicle theft , burglary, assault with a deadly weapon, intimate partner assault, and homicide. Results suggest that, in the first weeks after the interventions are put in place, social distancing impacts more directly on instrumental and less serious crimes. Policy implications are also discussed.", "edit_actions": [{"type": "R", "before": "The global spread of 2019-nCoV, a new virus belonging to the coronavirus family, forced national and local governments to apply different sets of measures aimed at containing its outbreak. Los Angeles has been one of the first cities in the United States to declare the state of emergency on March 4th, progressively issuing stronger policies involving--among the others--social distancing, the prohibition of crowded private and public gatherings and closure of leisure premises. These interventions highly disrupt and modify daily activities and habits, urban mobility and micro-level interactions between citizens. One of the many social phenomena that could be influenced by such measures is crime. Exploiting public data on crime in Los Angeles, and relying on routine activity and pattern theories of crime, this", "after": "This", "start_char_pos": 0, "end_char_pos": 818}, {"type": "R", "before": "new coronavirus containment policies have an", "after": "COVID-19 containment policies had an immediate", "start_char_pos": 853, "end_char_pos": 897}, {"type": "R", "before": "a metropolis", "after": "Los Angeles. The analysis is conducted", "start_char_pos": 924, "end_char_pos": 936}, {"type": "R", "before": "(BSTS) models. 
The article specifically", "after": "and", "start_char_pos": 975, "end_char_pos": 1014}, {"type": "R", "before": "urban crime categories", "after": "crime categories and on the overall crime count", "start_char_pos": 1031, "end_char_pos": 1053}, {"type": "D", "before": "The analyses have been updated bi-weekly (up to March 16", "after": null, "start_char_pos": 1114, "end_char_pos": 1170}, {"type": "R", "before": "and up to March 28", "after": "We concentrate on two post-intervention time windows - from March 4th to March 16th and from March 4th to March 28th", "start_char_pos": 1171, "end_char_pos": 1189}, {"type": "R", "before": ")", "after": "-", "start_char_pos": 1195, "end_char_pos": 1196}, {"type": "R", "before": "hard interventions to shed light on how crime adapts to such structural modification of the environment. The results show that overall crime in Las Angeles is significantly decreasing", "after": "strict policies. In Los Angeles, overall crime has significantly decreased", "start_char_pos": 1254, "end_char_pos": 1437}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1479, "end_char_pos": 1479}, {"type": "R", "before": "found for stolen vehicle", "after": "detected for vehicle theft", "start_char_pos": 1524, "end_char_pos": 1548}, {"type": "A", "before": null, "after": "a", "start_char_pos": 1574, "end_char_pos": 1574}, {"type": "R", "before": "violence", "after": "assault,", "start_char_pos": 1607, "end_char_pos": 1615}, {"type": "R", "before": "In the last section of this article, policy", "after": "Results suggest that, in the first weeks after the interventions are put in place, social distancing impacts more directly on instrumental and less serious crimes. 
Policy", "start_char_pos": 1630, "end_char_pos": 1673}], "sents_char_pos": [0, 188, 480, 617, 702, 989, 1113, 1358, 1492, 1629]} {"doc_id": "2003.11238", "revision_depth": "1", "before_revision": "We propose and analyze zeroth-order algorithms for optimization over Riemannian manifolds, where we observe only potentially noisy evaluations of the objective function. Our approach is based on estimating the Riemannian gradient from the objective function evaluations . We consider three settings for the objective function: (i) deterministic and smooth , (ii) stochastic and smooth , and (iii) composition of smooth and non-smooth parts. For each of the setting , we characterize the oracle complexity of our algorithm to obtain appropriately defined notions of \\epsilon-stationary points . Notably, our complexities are independent of the ambient dimension of the Euclidean space in which the manifold is embedded in, and only depend on the intrinsic dimension of the manifold . As a proof of concept, we demonstrate the applicability of our method to the problem of black-box attacks to deep neural networks, by providing simulation and real-world image data based experimental results.", "after_revision": "Stochastic zeroth-order optimization concerns problems where only noisy function evaluations are available. Such problems arises frequently in many important applications. In this paper, we consider stochastic zeroth-order optimization over Riemannian submanifolds embedded in an Euclidean space, an important but less studied area, and propose four algorithms for solving this class of problems under different settings. Our algorithms are based on estimating the Riemannian gradient and Hessian from noisy objective function evaluations , based on a Riemannian version of the Gaussian smoothing technique. 
In particular, we consider the following settings for the objective function: (i) stochastic and gradient-Lipschitz (in both nonconvex and geodesic convex settings) , (ii) sum of gradient-Lipschitz and non-smooth functions , and (iii) Hessian-Lipschitz. For these settings , we characterize the oracle complexity of our algorithms to obtain appropriately defined notions of \\epsilon-stationary point or \\epsilon-approximate local minimizer . Notably, our complexities are independent of the dimension of the ambient Euclidean space and depend only on the intrinsic dimension of the manifold under consideration. We demonstrate the applicability of our algorithms by simulation results.", "edit_actions": [{"type": "R", "before": "We propose and analyze", "after": "Stochastic", "start_char_pos": 0, "end_char_pos": 22}, {"type": "R", "before": "algorithms for optimization over Riemannian manifolds, where we observe only potentially noisy evaluations of the objective function. Our approach is", "after": "optimization concerns problems where only noisy function evaluations are available. Such problems arises frequently in many important applications. In this paper, we consider stochastic zeroth-order optimization over Riemannian submanifolds embedded in an Euclidean space, an important but less studied area, and propose four algorithms for solving this class of problems under different settings. Our algorithms are", "start_char_pos": 36, "end_char_pos": 185}, {"type": "R", "before": "from the", "after": "and Hessian from noisy", "start_char_pos": 230, "end_char_pos": 238}, {"type": "R", "before": ". We consider three", "after": ", based on a Riemannian version of the Gaussian smoothing technique. 
In particular, we consider the following", "start_char_pos": 270, "end_char_pos": 289}, {"type": "R", "before": "deterministic and smooth", "after": "stochastic and gradient-Lipschitz (in both nonconvex and geodesic convex settings)", "start_char_pos": 331, "end_char_pos": 355}, {"type": "R", "before": "stochastic and smooth", "after": "sum of gradient-Lipschitz and non-smooth functions", "start_char_pos": 363, "end_char_pos": 384}, {"type": "R", "before": "composition of smooth and non-smooth parts. For each of the setting", "after": "Hessian-Lipschitz. For these settings", "start_char_pos": 397, "end_char_pos": 464}, {"type": "R", "before": "algorithm", "after": "algorithms", "start_char_pos": 512, "end_char_pos": 521}, {"type": "R", "before": "points", "after": "point or \\epsilon-approximate local minimizer", "start_char_pos": 585, "end_char_pos": 591}, {"type": "D", "before": "ambient", "after": null, "start_char_pos": 643, "end_char_pos": 650}, {"type": "R", "before": "Euclidean space in which the manifold is embedded in, and only depend", "after": "ambient Euclidean space and depend only", "start_char_pos": 668, "end_char_pos": 737}, {"type": "R", "before": ". As a proof of concept, we", "after": "under consideration. We", "start_char_pos": 781, "end_char_pos": 808}, {"type": "R", "before": "method to the problem of black-box attacks to deep neural networks, by providing simulation and real-world image data based experimental", "after": "algorithms by simulation", "start_char_pos": 846, "end_char_pos": 982}], "sents_char_pos": [0, 169, 271, 440, 782]} {"doc_id": "2003.12432", "revision_depth": "2", "before_revision": "While the coronavirus spreads around the world , governments are attempting to reduce contagion rates at the expense of negative economic effects. Market expectations have plummeted, foreshadowing the risk of a global economic crisis and mass unemployment. 
Governments provide huge financial aid programmes to mitigate the expected economic shocks. To achieve higher effectiveness with cyclical and fiscal policy measures, it is key to identify the industries that are most in need of support. In this study, we introduce a data-mining approach to measure the industry-specific risks related to COVID-19. We examine company risk reports filed to the U.S. Securities and Exchange Commission (SEC). This data set allows for a real-time analysis of risk assessments. Preliminary findings suggest that the companies' awareness towards corona-related business risks is ahead of the overall stock market developments by weeks. The risk reports differ substantially between industries , both in magnitude and in nature . Based on natural language processing techniques, we can identify corona-related risk topics and their perceived relevance for different industries. Our approach allows to distinguish the industries by their reported risk awareness towards COVID-19. The preliminary findings are summarised an online index. The CoRisk-Index tracks the industry-specific risk assessments related to the crisis, as it spreads through the economy. The tracking tool could provide relevant empirical data to inform models on the immediate economic effects of the crisis. Such complementary empirical information could help policy-makers to effectively target financial support and to mitigate the economic shocks of the current crisis.", "after_revision": "While the coronavirus spreads , governments are attempting to reduce contagion rates at the expense of negative economic effects. Market expectations plummeted, foreshadowing the risk of a global economic crisis and mass unemployment. Governments provide huge financial aid programmes to mitigate the economic shocks. To achieve higher effectiveness with such policy measures, it is key to identify the industries that are most in need of support. 
In this study, we introduce a data-mining approach to measure industry-specific risks related to COVID-19. We examine company risk reports filed to the U.S. Securities and Exchange Commission (SEC). This alternative data set can complement more traditional economic indicators in times of the fast-evolving crisis as it allows for a real-time analysis of risk assessments. Preliminary findings suggest that the companies' awareness towards corona-related business risks is ahead of the overall stock market developments . Our approach allows to distinguish the industries by their risk awareness towards COVID-19. Based on natural language processing , we identify corona-related risk topics and their perceived relevance for different industries. The preliminary findings are summarised as an up-to-date online index. The CoRisk-Index tracks the industry-specific risk assessments related to the crisis, as it spreads through the economy. The tracking tool is updated weekly. It could provide relevant empirical data to inform models on the economic effects of the crisis. Such complementary empirical information could ultimately help policymakers to effectively target financial support in order to mitigate the economic shocks of the crisis.", "edit_actions": [{"type": "D", "before": "around the world", "after": null, "start_char_pos": 30, "end_char_pos": 46}, {"type": "D", "before": "have", "after": null, "start_char_pos": 167, "end_char_pos": 171}, {"type": "D", "before": "expected", "after": null, "start_char_pos": 323, "end_char_pos": 331}, {"type": "R", "before": "cyclical and fiscal", "after": "such", "start_char_pos": 386, "end_char_pos": 405}, {"type": "D", "before": "the", "after": null, "start_char_pos": 556, "end_char_pos": 559}, {"type": "R", "before": "data set", "after": "alternative data set can complement more traditional economic indicators in times of the fast-evolving crisis as it", "start_char_pos": 702, "end_char_pos": 710}, {"type": "D", "before": "by weeks. 
The risk reports differ substantially between industries , both in magnitude and in nature", "after": null, "start_char_pos": 911, "end_char_pos": 1011}, {"type": "A", "before": null, "after": "Our approach allows to distinguish the industries by their risk awareness towards COVID-19.", "start_char_pos": 1014, "end_char_pos": 1014}, {"type": "R", "before": "techniques, we can", "after": ", we", "start_char_pos": 1052, "end_char_pos": 1070}, {"type": "D", "before": "Our approach allows to distinguish the industries by their reported risk awareness towards COVID-19.", "after": null, "start_char_pos": 1163, "end_char_pos": 1263}, {"type": "R", "before": "an", "after": "as an up-to-date", "start_char_pos": 1304, "end_char_pos": 1306}, {"type": "A", "before": null, "after": "is updated weekly. It", "start_char_pos": 1460, "end_char_pos": 1460}, {"type": "D", "before": "immediate", "after": null, "start_char_pos": 1523, "end_char_pos": 1532}, {"type": "R", "before": "help policy-makers", "after": "ultimately help policymakers", "start_char_pos": 1612, "end_char_pos": 1630}, {"type": "R", "before": "and", "after": "in order", "start_char_pos": 1671, "end_char_pos": 1674}, {"type": "D", "before": "current", "after": null, "start_char_pos": 1714, "end_char_pos": 1721}], "sents_char_pos": [0, 146, 256, 348, 493, 604, 696, 763, 920, 1162, 1263, 1320, 1441, 1564]} {"doc_id": "2003.13041", "revision_depth": "2", "before_revision": "L\\'evy walks are random walks processes whose step length follows a long-tailed power law distribution. Due to their abundance as movement patterns of URLanisms, significant theoretical efforts have been devoted to identify the foraging circumstances that would make such patterns advantageous Viswanathan et al. Nature, 1999 . Despite numerous attempts , there is currently no conclusive analytical evidence that L\\'evy flights are preferable strategies in higher dimensions than one. 
Here we show that the optimality of inverse-square L\\'evy walks in two-dimensions becomes striking when targets are sparse and unpredictable in size, and when detection is weak. Specifically, we prove that under the intermittent model, in which detection is not possible while moving ballistically B\\'enichou et al. Reviews of Modern Physics, 2011%DIFDELCMD < ]%%% , this strategy optimally finds sparse targets of any size and shape. That is, in a square torus of area~n, and assuming that the detection radius is normalized to 1, the strategy finds any connected set of diameter D in%DIFDELCMD < \\tilde{O}%%% (n/D) expected time, whereas \\Omega(n/D) is an unconditional lower bound on the expected time, that holds even when assuming that the shape and size of the target are known. Furthermore, this particular L\\'evy process stands in stark contrast to many other basic intermittent processes, including all other L\\'evy walks, whose competitive ratio is shown to be polynomial in n, for wide ranges of target scales . Our results thus provide strong theoretical support for the optimality and robustness of intermittent\\emph{ L\\'evy walks under general conditions .", "after_revision": "L\\'evy walks are random walk processes whose step-lengths follow a long-tailed power-law distribution. Due to their abundance as movement patterns of URLanisms, significant theoretical efforts have been devoted to identifying the foraging circumstances that would make such patterns advantageous . However, despite extensive research , there is currently no mathematical proof indicating that L\\'evy walks are, in any manner, preferable strategies in higher dimensions than one. Here we prove that in finite two-dimensional terrains, the inverse-square L\\'evy %DIFDELCMD < ]%%% walk strategy is extremely efficient at finding sparse targets of arbitrary size and shape. %DIFDELCMD < \\tilde{O}%%% Moreover, this holds even under the weak model of intermittent detection. 
Conversely, any other intermittent L\\'evy walk fails to efficiently find either large targets or small ones . Our results shed new light on the\\emph{L\\'evy foraging hypothesis , and are thus expected to impact future experiments on animals performing L\\'evy walks .", "edit_actions": [{"type": "R", "before": "walks processes whose step length follows", "after": "walk processes whose step-lengths follow", "start_char_pos": 24, "end_char_pos": 65}, {"type": "R", "before": "power law", "after": "power-law", "start_char_pos": 80, "end_char_pos": 89}, {"type": "R", "before": "identify", "after": "identifying", "start_char_pos": 215, "end_char_pos": 223}, {"type": "D", "before": "Viswanathan et al. Nature, 1999", "after": null, "start_char_pos": 294, "end_char_pos": 325}, {"type": "R", "before": ". Despite numerous attempts", "after": ". However, despite extensive research", "start_char_pos": 326, "end_char_pos": 353}, {"type": "R", "before": "conclusive analytical evidence", "after": "mathematical proof indicating", "start_char_pos": 378, "end_char_pos": 408}, {"type": "R", "before": "flights are", "after": "walks are, in any manner,", "start_char_pos": 421, "end_char_pos": 432}, {"type": "R", "before": "show that the optimality of", "after": "prove that in finite two-dimensional terrains, the", "start_char_pos": 494, "end_char_pos": 521}, {"type": "D", "before": "walks in two-dimensions becomes striking when targets are sparse and unpredictable in size, and when detection is weak. Specifically, we prove that under the intermittent model, in which detection is not possible while moving ballistically", "after": null, "start_char_pos": 544, "end_char_pos": 783}, {"type": "D", "before": "B\\'enichou et al. 
Reviews of Modern Physics, 2011", "after": null, "start_char_pos": 784, "end_char_pos": 833}, {"type": "R", "before": ", this strategy optimally finds", "after": "walk strategy is extremely efficient at finding", "start_char_pos": 851, "end_char_pos": 882}, {"type": "R", "before": "any", "after": "arbitrary", "start_char_pos": 901, "end_char_pos": 904}, {"type": "D", "before": "That is, in a square torus of area~n, and assuming that the detection radius is normalized to 1, the strategy finds any connected set of diameter D in", "after": null, "start_char_pos": 921, "end_char_pos": 1071}, {"type": "R", "before": "(n/D) expected time, whereas \\Omega(n/D) is an unconditional lower bound on the expected time, that holds even when assuming that the shape and size of the target are known. Furthermore, this particular", "after": "Moreover, this holds even under the weak model of intermittent detection. Conversely, any other intermittent", "start_char_pos": 1097, "end_char_pos": 1299}, {"type": "R", "before": "process stands in stark contrast to many other basic intermittent processes, including all other L\\'evy walks, whose competitive ratio is shown to be polynomial in n, for wide ranges of target scales", "after": "walk fails to efficiently find either large targets or small ones", "start_char_pos": 1307, "end_char_pos": 1506}, {"type": "R", "before": "thus provide strong theoretical support for the optimality and robustness of intermittent", "after": "shed new light on the", "start_char_pos": 1521, "end_char_pos": 1610}, {"type": "A", "before": null, "after": "L\\'evy foraging hypothesis", "start_char_pos": 1616, "end_char_pos": 1616}, {"type": "A", "before": null, "after": ", and are thus expected to impact future experiments on animals performing", "start_char_pos": 1617, "end_char_pos": 1617}, {"type": "D", "before": "under general conditions", "after": null, "start_char_pos": 1631, "end_char_pos": 1655}], "sents_char_pos": [0, 103, 312, 485, 663, 801, 920, 1270, 
1508]} {"doc_id": "2003.14091", "revision_depth": "1", "before_revision": "An innovative approach which can promote cognitive functions effectively and efficiently is an urgent need for healthy elderly and patients with cognitive impairment . In this study, we proposed a novel functional near-infrared spectroscopy (fNIRS)-based frontoparietal functional connectivity (FC) neurofeedback training paradigm related to working memory . Compared with conventional cognitive training studies, we chose the frontoparietal network, a key brain region for cognitive function modulation as neurofeedback, resulting in strong targeting effect. In the experiment, ten participants received three 20 min cognitive training sessions with fNIRS-based frontoparietal FC as neurofeedback, and the other ten participants served as the normal control (NC) group. Frontoparietal FC was significantly increased in the tested group (p = 0.005) and the cognitive functions (memory and attention) were significantly promoted compared to the NC group . Follow-up evaluations indicated that the training effect can last for over half a month. The proposed method shows great potential to be developed as a fast, effective and widespread training approach for cognitive functions enhancementand rehabilitation applications .", "after_revision": "Neurofeedback cognitive training is a promising tool used to promote cognitive functions effectively and efficiently . In this study, we investigated a novel functional near-infrared spectroscopy (fNIRS)-based frontoparietal functional connectivity (FC) neurofeedback training paradigm related to working memory , involving healthy adults . Compared with conventional cognitive training studies, we chose the frontoparietal network, a key brain region for cognitive function modulation , as neurofeedback, yielding a strong targeting effect. 
In the experiment, 10 participants (test group) received three cognitive training sessions of 15 min using fNIRS-based frontoparietal FC as neurofeedback, and another 10 participants served as the control group. Frontoparietal FC was significantly increased in the test group (p D 0.03), and the cognitive functions (memory and attention) were significantly promoted compared with the control group (accuracy of 3-back test: p D 0.0005, reaction time of 3-back test: p D 0.0009). After additional validations on long-term training effect and on different patient populations, the proposed method exhibited considerable potential to be developed as a fast, effective , and widespread training approach for cognitive function enhancement .", "edit_actions": [{"type": "R", "before": "An innovative approach which can", "after": "Neurofeedback cognitive training is a promising tool used to", "start_char_pos": 0, "end_char_pos": 32}, {"type": "D", "before": "is an urgent need for healthy elderly and patients with cognitive impairment", "after": null, "start_char_pos": 89, "end_char_pos": 165}, {"type": "R", "before": "proposed", "after": "investigated", "start_char_pos": 186, "end_char_pos": 194}, {"type": "A", "before": null, "after": ", involving healthy adults", "start_char_pos": 357, "end_char_pos": 357}, {"type": "A", "before": null, "after": ",", "start_char_pos": 505, "end_char_pos": 505}, {"type": "R", "before": "resulting in", "after": "yielding a", "start_char_pos": 524, "end_char_pos": 536}, {"type": "R", "before": "ten participants received three 20 min", "after": "10 participants (test group) received three", "start_char_pos": 581, "end_char_pos": 619}, {"type": "R", "before": "with", "after": "of 15 min using", "start_char_pos": 648, "end_char_pos": 652}, {"type": "R", "before": "the other ten", "after": "another 10", "start_char_pos": 705, "end_char_pos": 718}, {"type": "R", "before": "normal control (NC)", "after": "control", "start_char_pos": 746, "end_char_pos": 
765}, {"type": "R", "before": "tested", "after": "test", "start_char_pos": 826, "end_char_pos": 832}, {"type": "R", "before": "= 0.005)", "after": "D 0.03),", "start_char_pos": 842, "end_char_pos": 850}, {"type": "R", "before": "to the NC group . Follow-up evaluations indicated that the training effect can last for over half a month. The proposed method shows great", "after": "with the control group (accuracy of 3-back test: p D 0.0005, reaction time of 3-back test: p D 0.0009). After additional validations on long-term training effect and on different patient populations, the proposed method exhibited considerable", "start_char_pos": 939, "end_char_pos": 1077}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1125, "end_char_pos": 1125}, {"type": "R", "before": "functions enhancementand rehabilitation applications", "after": "function enhancement", "start_char_pos": 1173, "end_char_pos": 1225}], "sents_char_pos": [0, 167, 561, 772, 956, 1045]} {"doc_id": "2004.00908", "revision_depth": "1", "before_revision": "Emerging infectious diseases are crucial threats to human health and global stability. The recent outbreak of the novel coronavirus COVID-19 has rapidly formed a global pandemic, causing hundreds of thousands of infections and huge economic loss. The WHO declares that more precise measures to track, detect and isolate infected people are among the most effective means to quickly contain the outbreak. Based on trajectory big data and the theory of mean-field , we establish an aggregated risk mean = field that contains information of all risk-spreading particles by proposing a spatio-temporal model named HiRES risk map. It has dynamic fine spatial resolution and high computation efficiency enabling fast update. 
We then propose an objective personal epidemic risk scoring model named HiRES-p based on HiRES risk maps, and use it to develop statistical inference based method and machine learning based method for detecting suspected epidemic-infected individuals. We conduct numerical experiments by applying the proposed methods to study the early outbreak of COVID-19 in China. Results show that the HiRES risk map has strong ability in capturing global trend and local variability of the epidemic risk, thus can be applied to monitor epidemic risk at country, province, city and community levels, as well as at specific high-risk locations such as at hospital and station. HiRES-p score is an effective measurement of personal epidemic risk. The detection rates as de ned by successful classification are above 90 \\% as long as the population infection rate is under 20\\%, which indicates great application potential in epidemic risk prevention and control practice.%DIFDELCMD < \\end{abstract} %DIFDELCMD < %%% \\\\%DIF > when the population infection rate is under 20\\\\%, which indicates great application potential in epidemic risk prevention and control practice.", "after_revision": "Emerging infectious diseases are existential threats to human health and global stability. The recent outbreaks of the novel coronavirus COVID-19 have rapidly formed a global pandemic, causing hundreds of thousands of infections and huge economic loss. The WHO declares that more precise measures to track, detect and isolate infected people are among the most effective means to quickly contain the outbreak. Based on trajectory provided by the big data and the mean field theory , we establish an aggregated risk mean field that contains information of all risk-spreading particles by proposing a spatio-temporal model named HiRES risk map. It has dynamic fine spatial resolution and high computation efficiency enabling fast update. 
We then propose an objective individual epidemic risk scoring model named HiRES-p based on HiRES risk maps, and use it to develop statistical inference and machine learning methods for detecting suspected epidemic-infected individuals. We conduct numerical experiments by applying the proposed methods to study the early outbreak of COVID-19 in China. Results show that the HiRES risk map has strong ability in capturing global trend and local variability of the epidemic risk, thus can be applied to monitor epidemic risk at country, province, city and community levels, as well as at specific high-risk locations such as hospital and station. HiRES-p score seems to be an effective measurement of personal epidemic risk. The accuracy of both detecting methods are above 90 %DIFDELCMD < \\end{abstract} %DIFDELCMD < %%% \\\\%DIF > when the population infection rate is under 20\\\\%, which indicates great application potential in epidemic risk prevention and control practice.", "edit_actions": [{"type": "R", "before": "crucial", "after": "existential", "start_char_pos": 33, "end_char_pos": 40}, {"type": "R", "before": "outbreak", "after": "outbreaks", "start_char_pos": 98, "end_char_pos": 106}, {"type": "R", "before": "has", "after": "have", "start_char_pos": 141, "end_char_pos": 144}, {"type": "A", "before": null, "after": "provided by the", "start_char_pos": 424, "end_char_pos": 424}, {"type": "R", "before": "theory of mean-field", "after": "mean field theory", "start_char_pos": 442, "end_char_pos": 462}, {"type": "D", "before": "=", "after": null, "start_char_pos": 502, "end_char_pos": 503}, {"type": "R", "before": "personal", "after": "individual", "start_char_pos": 749, "end_char_pos": 757}, {"type": "D", "before": "based method", "after": null, "start_char_pos": 870, "end_char_pos": 882}, {"type": "R", "before": "based method", "after": "methods", "start_char_pos": 904, "end_char_pos": 916}, {"type": "D", "before": "at", "after": null, "start_char_pos": 1359, 
"end_char_pos": 1361}, {"type": "R", "before": "is", "after": "seems to be", "start_char_pos": 1398, "end_char_pos": 1400}, {"type": "R", "before": "detection rates as de ned by successful classification", "after": "accuracy of both detecting methods", "start_char_pos": 1457, "end_char_pos": 1511}, {"type": "D", "before": "\\% as long as the population infection rate is under 20\\%, which indicates great application potential in epidemic risk prevention and control practice.", "after": null, "start_char_pos": 1525, "end_char_pos": 1677}], "sents_char_pos": [0, 86, 246, 403, 626, 719, 971, 1087, 1383, 1452, 1677]} {"doc_id": "2004.02340", "revision_depth": "1", "before_revision": "Recent reports from industry show that social recommender systems consistently fail in practice. According to the negative findings, the failure is attributed to: (1) a majority of users only have a very limited number of neighbors in social networks and can hardly benefit from relations; (2) social relations are noisy but they are often indiscriminately used; (3) social relations are assumed to be universally applicable to multiple scenarios while they are actually multi-faceted and show heterogeneous strengths in different scenarios. Most existing social recommendation models only consider the homophily in social networks and neglect these drawbacks. In this paper we propose a deep adversarial framework based on graph convolutional networks (GCN) to address these problems. Concretely, for the relation sparsity and noises problems , a GCN-based autoencoder is developed to augment the relation data by encoding high-order and complex connectivity patterns, and meanwhile is optimized subject to the constraint of reconstructing the original social profile to guarantee the validity of new identified neighborhood. 
After obtaining enough purified social relations for each user, a GCN-based attentive social recommendation module is designed to capture the heterogeneous strengths of social relations. These designs deal with the three problems faced by social recommender systems respectively. Finally, we adopt adversarial training to unify and intensify all components by playing a minimax game and ensure a coordinated effort to enhance social recommendation . Experimental results on multiple open datasets demonstrate the superiority of our framework and the ablation study confirms the importance and effectiveness of each component.", "after_revision": "Social recommender systems are expected to improve recommendation quality by incorporating social information when there is little user-item interaction data. However, recent reports from industry show that social recommender systems consistently fail in practice. According to the negative findings, the failure is attributed to: (1) A majority of users only have a very limited number of neighbors in social networks and can hardly benefit from social relations; (2) Social relations are noisy but they are indiscriminately used; (3) Social relations are assumed to be universally applicable to multiple scenarios while they are actually multi-faceted and show heterogeneous strengths in different scenarios. Most existing social recommendation models only consider the homophily in social networks and neglect these drawbacks. In this paper we propose a deep adversarial framework based on graph convolutional networks (GCN) to address these problems. Concretely, for (1) and (2) , a GCN-based autoencoder is developed to augment the relation data by encoding high-order and complex connectivity patterns, and meanwhile is optimized subject to the constraint of reconstructing the social profile to guarantee the validity of the identified neighborhood. 
After obtaining enough purified social relations for each user, a GCN-based attentive social recommendation module is designed to address (3) by capturing the heterogeneous strengths of social relations. Finally, we adopt adversarial training to unify all the components by playing a Minimax game and ensure a coordinated effort to enhance recommendation performance. Extensive experiments on multiple open datasets demonstrate the superiority of our framework and the ablation study confirms the importance and effectiveness of each component.", "edit_actions": [{"type": "R", "before": "Recent", "after": "Social recommender systems are expected to improve recommendation quality by incorporating social information when there is little user-item interaction data. However, recent", "start_char_pos": 0, "end_char_pos": 6}, {"type": "R", "before": "a", "after": "A", "start_char_pos": 167, "end_char_pos": 168}, {"type": "A", "before": null, "after": "social", "start_char_pos": 279, "end_char_pos": 279}, {"type": "R", "before": "social", "after": "Social", "start_char_pos": 295, "end_char_pos": 301}, {"type": "D", "before": "often", "after": null, "start_char_pos": 335, "end_char_pos": 340}, {"type": "R", "before": "social", "after": "Social", "start_char_pos": 368, "end_char_pos": 374}, {"type": "R", "before": "the relation sparsity and noises problems", "after": "(1) and (2)", "start_char_pos": 803, "end_char_pos": 844}, {"type": "D", "before": "original", "after": null, "start_char_pos": 1046, "end_char_pos": 1054}, {"type": "R", "before": "new", "after": "the", "start_char_pos": 1099, "end_char_pos": 1102}, {"type": "R", "before": "capture", "after": "address (3) by capturing", "start_char_pos": 1258, "end_char_pos": 1265}, {"type": "D", "before": "These designs deal with the three problems faced by social recommender systems respectively.", "after": null, "start_char_pos": 1315, "end_char_pos": 1407}, {"type": "R", "before": "and intensify all", "after": "all the", 
"start_char_pos": 1456, "end_char_pos": 1473}, {"type": "R", "before": "minimax", "after": "Minimax", "start_char_pos": 1498, "end_char_pos": 1505}, {"type": "R", "before": "social recommendation . Experimental results", "after": "recommendation performance. Extensive experiments", "start_char_pos": 1554, "end_char_pos": 1598}], "sents_char_pos": [0, 96, 290, 363, 542, 661, 786, 1127, 1314, 1407]} {"doc_id": "2004.02739", "revision_depth": "1", "before_revision": "We present a novel methodology in order to perform the epidemic risk assessment in terms of different factors which are useful to understand the different impact of an epidemic in different areas of a country. In particular we discuss the case of COVID-19 outbreak in Italy. We characterize each region of Italy by considering the available data on air pollution, mobility, winter temperature, housing concentration, health care density, total and aged population . We find that the epidemic risk is higher in some of the northern regions of Italy with respect to central and southern part . Our epidemic risk index shows strong correlations with the available official data of the COVID-19 outbreak in Italy and explain in particular why regions like Lombardia, Emilia-Romagna, Piemonte and Veneto are suffering much more than the rest of the country in terms of infected cases, intensive care units and deceased patients . Although the COVID-19 outbreak started almost in the same period, at the beginning of 2020, in both north (Lombardia and Veneto) and central part of Italy (Lazio) , when the first infected were officially certified , the outbreak has been more diffuse and lethal only in those regions with higher epidemic risk. Due to the fact that a large part of infected people do not show evident symptoms of the disease, we claim that in the regions with a lower epidemic risk, i.e. 
the central and south part of Italy, the epidemic , although probably already very diffused, should not be as lethal as in the northern part of Italy due to a different a-priori risk exposure . We also discuss some policy implications directly connected with our methodology, which results to be very flexible and adaptable to different set of epidemic historical data and to different countries.", "after_revision": "We propose a novel data-driven framework for assessing the a-priori epidemic risk of a geographical area, and for identifying high-risk areas within a country. Our risk index is evaluated as a function of three different components: the hazard of the disease, the exposure of the area and its vulnerability. As an application, we discuss the case of COVID-19 outbreak in Italy. We characterize each of the twenty Italian regions by using available data on air pollution, mobility, winter temperature, housing concentration, health care density, population size and age . We find that the epidemic risk is higher in some of the Northern regions with respect to Central and Southern Italy . Our epidemic risk index shows strong correlations with the available official data on the number of infected individuals, patients in intensive care and deceased patients, and can explain why regions such as Lombardia, Emilia-Romagna, Piemonte and Veneto are suffering much more than the rest of the country . Although the COVID-19 outbreak started in both North (Lombardia and Veneto) and Central Italy (Lazio) almost at the same time , when the first infected were officially certified at the beginning of 2020, the disease has spread faster and with heavier consequences in regions with higher epidemic risk. Our framework can be extended and tested on other epidemic data, such as those on seasonal flu . 
We also discuss some policy implications directly connected with our methodology, which results to be very flexible and can be adopted for risk assessment in other countries.", "edit_actions": [{"type": "R", "before": "present a novel methodology in order to perform the epidemic risk assessment in terms of different factors which are useful to understand the different impact of an epidemic in different areas of a country. In particular", "after": "propose a novel data-driven framework for assessing the a-priori epidemic risk of a geographical area, and for identifying high-risk areas within a country. Our risk index is evaluated as a function of three different components: the hazard of the disease, the exposure of the area and its vulnerability. As an application,", "start_char_pos": 3, "end_char_pos": 223}, {"type": "R", "before": "region of Italy by considering the", "after": "of the twenty Italian regions by using", "start_char_pos": 296, "end_char_pos": 330}, {"type": "R", "before": "total and aged population", "after": "population size and age", "start_char_pos": 438, "end_char_pos": 463}, {"type": "R", "before": "northern regions of Italy", "after": "Northern regions", "start_char_pos": 522, "end_char_pos": 547}, {"type": "R", "before": "central and southern part", "after": "Central and Southern Italy", "start_char_pos": 564, "end_char_pos": 589}, {"type": "R", "before": "of the COVID-19 outbreak in Italy and explain in particular why regions like", "after": "on the number of infected individuals, patients in intensive care and deceased patients, and can explain why regions such as", "start_char_pos": 675, "end_char_pos": 751}, {"type": "D", "before": "in terms of infected cases, intensive care units and deceased patients", "after": null, "start_char_pos": 852, "end_char_pos": 922}, {"type": "R", "before": "almost in the same period, at the beginning of 2020, in both north", "after": "in both North", "start_char_pos": 964, "end_char_pos": 1030}, {"type": 
"R", "before": "central part of", "after": "Central", "start_char_pos": 1058, "end_char_pos": 1073}, {"type": "A", "before": null, "after": "almost at the same time", "start_char_pos": 1088, "end_char_pos": 1088}, {"type": "R", "before": ", the outbreak has been more diffuse and lethal only in those", "after": "at the beginning of 2020, the disease has spread faster and with heavier consequences in", "start_char_pos": 1141, "end_char_pos": 1202}, {"type": "R", "before": "Due to the fact that a large part of infected people do not show evident symptoms of the disease, we claim that in the regions with a lower epidemic risk, i.e. the central and south part of Italy, the epidemic , although probably already very diffused, should not be as lethal as in the northern part of Italy due to a different a-priori risk exposure", "after": "Our framework can be extended and tested on other epidemic data, such as those on seasonal flu", "start_char_pos": 1238, "end_char_pos": 1589}, {"type": "R", "before": "adaptable to different set of epidemic historical data and to different", "after": "can be adopted for risk assessment in other", "start_char_pos": 1712, "end_char_pos": 1783}], "sents_char_pos": [0, 209, 274, 465, 591, 924, 1237, 1591]} {"doc_id": "2004.03384", "revision_depth": "1", "before_revision": "One major bottleneck in the ongoing Covid-19 pandemic is the limited number of critical care beds. Due to the dynamic development of infections and the time lag between when patients are infected and when a proportion of them enters an intensive care unit (ICU), the need for future intensive care can easily be underestimated. To derive future ICU load from reported infections, we suggest a simple statistical model that (1) accounts for time lags and (2) allows for making predictions depending on different future growth rates. 
We evaluate our model for public data from Berlin, Germany, by first estimating the model parameters (i.e., time lag and average stay in ICU)for March 2020 and then using an exponential model to predict the future ICU load for April and May 2020. Assuming an ICU rate of 5 \\%, a time lag of 5 days and an average stay of 14 days in ICU provide the best fit of the data and is in accord with independent estimates. Our model is then used to predict future ICU load assuming a continued exponential phase with varying growth rates (0-15\\%) . For example, based on our parameters the model predicts that the number of ICU patients at the end of May would be 246 if the exponential growth were to continue at a rate of 3\\%, 1,056 if the growth rate were 5\\% and 3,758 if the growth rate were 7\\%. The model can be adjusted as estimates of parameters develop and can thus also help to predict a potential exceedance of ICU capacity. Although our predictions are based on a small data set, disregard non-stationary dynamics, and have a number of assumptions, especially an exponential development of cases, our model is simple, robust, adaptable and can be up-dated when further data become available .", "after_revision": "One major bottleneck in the ongoing COVID-19 pandemic is the limited number of critical care beds. Due to the dynamic development of infections and the time lag between when patients are infected and when a proportion of them enters an intensive care unit (ICU), the need for future intensive care can easily be underestimated. To infer future ICU load from reported infections, we suggest a simple statistical model that (1) accounts for time lags and (2) allows for making predictions depending on different future growth of infections. We have evaluated our model for three regions, namely Berlin (Germany), Lombardy (Italy), and Madrid (Spain). Before extensive containment measures made an impact, we first estimate the region-specific model parameters. 
Whereas for Berlin, an ICU rate of 6 \\%, a time lag of 6 days, and an average stay of 12 days in ICU provide the best fit of the data , for Lombardy and Madrid the ICU rate was higher (18\\% and 15\\%) and the time lag (0 and 3 days) and the average stay (4 and 8 days) in ICU shorter. The region-specific models are then used to predict future ICU load assuming either a continued exponential phase with varying growth rates (0-15\\%) or linear growth. Thus, the model can help to predict a potential exceedance of ICU capacity. Although our predictions are based on small data sets and disregard non-stationary dynamics, our model is simple, robust, and can be used in early phases of the disease when data are scarce .", "edit_actions": [{"type": "R", "before": "Covid-19", "after": "COVID-19", "start_char_pos": 36, "end_char_pos": 44}, {"type": "R", "before": "derive", "after": "infer", "start_char_pos": 331, "end_char_pos": 337}, {"type": "R", "before": "rates. We evaluate", "after": "of infections. We have evaluated", "start_char_pos": 525, "end_char_pos": 543}, {"type": "R", "before": "public data from Berlin, Germany, by first estimating the model parameters (i.e., time lag and average stay in ICU)for March 2020 and then using an exponential model to predict the future ICU load for April and May 2020. Assuming", "after": "three regions, namely Berlin (Germany), Lombardy (Italy), and Madrid (Spain). Before extensive containment measures made an impact, we first estimate the region-specific model parameters. 
Whereas for Berlin,", "start_char_pos": 558, "end_char_pos": 787}, {"type": "R", "before": "5", "after": "6", "start_char_pos": 803, "end_char_pos": 804}, {"type": "R", "before": "5 days", "after": "6 days,", "start_char_pos": 823, "end_char_pos": 829}, {"type": "R", "before": "14", "after": "12", "start_char_pos": 853, "end_char_pos": 855}, {"type": "A", "before": null, "after": ", for Lombardy and Madrid the ICU rate was higher (18\\% and 15\\%) and the time lag (0", "start_char_pos": 901, "end_char_pos": 901}, {"type": "R", "before": "is in accord with independent estimates. Our model is", "after": "3 days) and the average stay (4 and 8 days) in ICU shorter. The region-specific models are", "start_char_pos": 906, "end_char_pos": 959}, {"type": "A", "before": null, "after": "either", "start_char_pos": 1006, "end_char_pos": 1006}, {"type": "R", "before": ". For example, based on our parameters the model predicts that the number of ICU patients at the end of May would be 246 if the exponential growth were to continue at a rate of 3\\%, 1,056 if the growth rate were 5\\% and 3,758 if the growth rate were 7\\%. The model can be adjusted as estimates of parameters develop and can thus also", "after": "or linear growth. 
Thus, the model can", "start_char_pos": 1072, "end_char_pos": 1405}, {"type": "R", "before": "a small data set,", "after": "small data sets and", "start_char_pos": 1500, "end_char_pos": 1517}, {"type": "D", "before": "and have a number of assumptions, especially an exponential development of cases,", "after": null, "start_char_pos": 1553, "end_char_pos": 1634}, {"type": "D", "before": "adaptable", "after": null, "start_char_pos": 1664, "end_char_pos": 1673}, {"type": "R", "before": "up-dated when further data become available", "after": "used in early phases of the disease when data are scarce", "start_char_pos": 1685, "end_char_pos": 1728}], "sents_char_pos": [0, 98, 327, 531, 778, 946, 1073, 1326, 1461]} {"doc_id": "2004.04614", "revision_depth": "1", "before_revision": "We argue that random testing (polling the fraction of infected people in the population) is central to managing the COVID-19 pandemic because it both measures the key variable controlled by restrictive measures, and anticipates the load on the healthcare system via the progression of the disease. Knowledge of random testing outcomes will therefore (i) significantly improve the predictability of the course of the pandemic, (ii) allow informed and optimized decisions on how to modify restrictive measures, with much shorter delay times than the present ones, and (iii) enable the real-time assessment of the efficiency of new means to reduce transmission rates (such as new tracing strategies based on the mobile telephone network, wearing face masks, etc. ). Frequent random testing for COVID-19 infections has the essential benefit of providing more reliable and refined data than currently available, in both time and space. This is crucial to accompany and monitor the safe release of restrictive measures. 
Here we show that independent of the total size of population with frequent interactions among its members, about 15000 tests with randomly selected people per day suffice to obtain valuable data about the current number of infections and their evolution in time . In contrast to testing confined to particular subpopulations such as those displaying symptoms, this will allow close to real-time assessment of the quantitative effect of restrictive measures. With yet higher testing capacity , random testing further allows detection of geographical differences in spreading rates and thus the formulation of optimal strategies for a safe reboot of the economy. Most importantly, with daily random testing in place, a reboot could be attempted while the fraction of infected people is still an order of magnitude higher than the level required for a reboot without such polling .", "after_revision": "We argue that frequent sampling of the fraction of infected people (either by random testing or by analysis of sewage water), is central to managing the COVID-19 pandemic because it both measures in real time the key variable controlled by restrictive measures, and anticipates the load on the healthcare system due to progression of the disease. Knowledge of random testing outcomes will (i) significantly improve the predictability of the pandemic, (ii) allow informed and optimized decisions on how to modify restrictive measures, with much shorter delay times than the present ones, and (iii) enable the real-time assessment of the efficiency of new means to reduce transmission rates . Here we suggest, irrespective of the size of a suitably homogeneous population, a conservative estimate of 15000 for the number of randomly tested people per day which will suffice to obtain reliable data about the current fraction of infections and its evolution in time , thus enabling close to real-time assessment of the quantitative effect of restrictive measures. 
Still higher testing capacity permits detection of geographical differences in spreading rates . Furthermore and most importantly, with daily sampling in place, a reboot could be attempted while the fraction of infected people is still an order of magnitude higher than the level required for a relaxation of restrictions with testing focused on symptomatic individuals. This is demonstrated by considering a feedback and control model of mitigation where the feed-back is derived from noisy sampling data .", "edit_actions": [{"type": "R", "before": "random testing (polling", "after": "frequent sampling of", "start_char_pos": 14, "end_char_pos": 37}, {"type": "R", "before": "in the population)", "after": "(either by random testing or by analysis of sewage water),", "start_char_pos": 70, "end_char_pos": 88}, {"type": "A", "before": null, "after": "in real time", "start_char_pos": 159, "end_char_pos": 159}, {"type": "R", "before": "via the", "after": "due to", "start_char_pos": 263, "end_char_pos": 270}, {"type": "D", "before": "therefore", "after": null, "start_char_pos": 341, "end_char_pos": 350}, {"type": "D", "before": "course of the", "after": null, "start_char_pos": 403, "end_char_pos": 416}, {"type": "R", "before": "(such as new tracing strategies based on the mobile telephone network, wearing face masks, etc. ). Frequent random testing for COVID-19 infections has the essential benefit of providing more reliable and refined data than currently available, in both time and space. This is crucial to accompany and monitor the safe release of restrictive measures. Here we show that independent of the total size of population with frequent interactions among its members, about", "after": ". 
Here we suggest, irrespective of the size of a suitably homogeneous population, a conservative estimate of", "start_char_pos": 665, "end_char_pos": 1128}, {"type": "R", "before": "tests with randomly selected", "after": "for the number of randomly tested", "start_char_pos": 1135, "end_char_pos": 1163}, {"type": "A", "before": null, "after": "which will", "start_char_pos": 1179, "end_char_pos": 1179}, {"type": "R", "before": "valuable", "after": "reliable", "start_char_pos": 1198, "end_char_pos": 1206}, {"type": "R", "before": "number", "after": "fraction", "start_char_pos": 1230, "end_char_pos": 1236}, {"type": "R", "before": "their", "after": "its", "start_char_pos": 1255, "end_char_pos": 1260}, {"type": "R", "before": ". In contrast to testing confined to particular subpopulations such as those displaying symptoms, this will allow", "after": ", thus enabling", "start_char_pos": 1279, "end_char_pos": 1392}, {"type": "R", "before": "With yet", "after": "Still", "start_char_pos": 1475, "end_char_pos": 1483}, {"type": "R", "before": ", random testing further allows", "after": "permits", "start_char_pos": 1508, "end_char_pos": 1539}, {"type": "R", "before": "and thus the formulation of optimal strategies for a safe reboot of the economy. Most", "after": ". Furthermore and most", "start_char_pos": 1597, "end_char_pos": 1682}, {"type": "R", "before": "random testing", "after": "sampling", "start_char_pos": 1707, "end_char_pos": 1721}, {"type": "R", "before": "reboot without such polling", "after": "relaxation of restrictions with testing focused on symptomatic individuals. 
This is demonstrated by considering a feedback and control model of mitigation where the feed-back is derived from noisy sampling data", "start_char_pos": 1866, "end_char_pos": 1893}], "sents_char_pos": [0, 298, 763, 931, 1014, 1280, 1474, 1677]} {"doc_id": "2004.04903", "revision_depth": "1", "before_revision": "In stable environments, statistics of cell size fluctuations is thought to be governed by simple physical principles . Past studies suggested that bacterial cell sizes exhibit a universal distribution irrespective of growth conditions, and for eukaryotes, a single distribution may describe size fluctuations of various cell species. The distinguished feature of those distributions is scale invariance; i.e., the distribution function is determined solely by the mean cell size. Here we show, using E. coli, that such a simple distribution law also persists under time-dependent environments, which then involve regulations of cell cycle kinetics. By developing a membrane-based microfluidic device suitable for culturing a large cell population under uniform and controllable growth conditions, we study how the cell size distribution changes after the supplied medium is switched from a nutritious to non-nutritious one, triggering the bacterial reductive division . The mean cell size then gradually decreases, but we find that the size distribution is kept unchanged if the cell sizes are normalized by their time-dependent mean value ; in other words the scale invariance holds as it is. We also study a model considering intracellular replication and cell volume growth and successfully reproduce our experimental results. Furthermore, we give a theoretical expression for the time-dependent cell size distribution and propose a sufficient condition for the scale invariance. 
Our findings emphasize that, compared with environmental factors, the intrinsic cellular replication processes have stronger impact on the cell size distribution, and consequently bacteria and eukaryotes are ruled by different, yet possibly universal distributions .", "after_revision": "In stable environments, cell size fluctuations are thought to be governed by simple physical principles , as suggested by recent finding of scaling properties. Here we show, using E. coli, that the scaling concept also rules cell size fluctuations under time-dependent conditions, even though the distribution changes with time. We develop a microfluidic device for observing dense and large bacterial populations, under uniform and switchable conditions. Triggering bacterial reductive division by switching to non-nutritious medium, we find evidence that the cell size distribution changes in a specific manner that keeps its normalized form unchanged ; in other words , scale invariance holds . This finding is underpinned by simulations of a model based on cell growth and intracellular replication. We also formulate the problem theoretically and propose a sufficient condition for the scale invariance. Our results emphasize the importance of intrinsic cellular replication processes in this problem, suggesting different distribution trends for bacteria and eukaryotes .", "edit_actions": [{"type": "D", "before": "statistics of", "after": null, "start_char_pos": 24, "end_char_pos": 37}, {"type": "R", "before": "is", "after": "are", "start_char_pos": 61, "end_char_pos": 63}, {"type": "R", "before": ". Past studies suggested that bacterial cell sizes exhibit a universal distribution irrespective of growth conditions, and for eukaryotes, a single distribution may describe size fluctuations of various cell species. 
The distinguished feature of those distributions is scale invariance; i.e., the distribution function is determined solely by the mean cell size.", "after": ", as suggested by recent finding of scaling properties.", "start_char_pos": 117, "end_char_pos": 479}, {"type": "R", "before": "such a simple distribution law also persists", "after": "the scaling concept also rules cell size fluctuations", "start_char_pos": 514, "end_char_pos": 558}, {"type": "R", "before": "environments, which then involve regulations of cell cycle kinetics. By developing a membrane-based microfluidic device suitable for culturing a large cell population", "after": "conditions, even though the distribution changes with time. We develop a microfluidic device for observing dense and large bacterial populations,", "start_char_pos": 580, "end_char_pos": 746}, {"type": "R", "before": "controllable growth conditions, we study how the cell size distribution changes after the supplied medium is switched from a nutritious to non-nutritious one, triggering the", "after": "switchable conditions. Triggering", "start_char_pos": 765, "end_char_pos": 938}, {"type": "R", "before": ". The mean cell size then gradually decreases, but we find that the size distribution is kept unchanged if the cell sizes are normalized by their time-dependent mean value", "after": "by switching to non-nutritious medium, we find evidence that the cell size distribution changes in a specific manner that keeps its normalized form unchanged", "start_char_pos": 968, "end_char_pos": 1139}, {"type": "R", "before": "the", "after": ",", "start_char_pos": 1157, "end_char_pos": 1160}, {"type": "R", "before": "as it is. We also study a model considering intracellular replication and cell volume growth and successfully reproduce our experimental results. Furthermore, we give a theoretical expression for the time-dependent cell size distribution", "after": ". 
This finding is underpinned by simulations of a model based on cell growth", "start_char_pos": 1184, "end_char_pos": 1421}, {"type": "A", "before": null, "after": "intracellular replication. We also formulate the problem theoretically and", "start_char_pos": 1426, "end_char_pos": 1426}, {"type": "R", "before": "findings emphasize that, compared with environmental factors, the", "after": "results emphasize the importance of", "start_char_pos": 1488, "end_char_pos": 1553}, {"type": "R", "before": "have stronger impact on the cell size distribution, and consequently", "after": "in this problem, suggesting different distribution trends for", "start_char_pos": 1595, "end_char_pos": 1663}, {"type": "D", "before": "are ruled by different, yet possibly universal distributions", "after": null, "start_char_pos": 1688, "end_char_pos": 1748}], "sents_char_pos": [0, 333, 403, 479, 648, 969, 1141, 1193, 1329, 1483]} {"doc_id": "2004.05478", "revision_depth": "1", "before_revision": "TikTok is a video-sharing social networking service, whose popularity is increasing rapidly. It was the world's second-most downloaded app in 2019. Although the platform is known for having users dancing, lip-syncing and showing off their talents, there is an increase in videos designed to express political opinions. In this study , we perform the first evaluation of political communication on this platform . We collect a set of US Republican and Democratic partisan videos and investigate how users communicate with each other . With the help of computer vision, natural language processing, and statistical tools, we illustrate that political communication is much more interactive on TikTok in contrast to other social media platforms, with users combining multiple information channels to spread their messages. We show that political communication takes place in the form of communication trees since users generate branches of responses on existing content. 
Finally, we investigate user demographicsand their interactions with opposing views. We find that partisan users from both parties are young and behave similarly on the platform. We also find that Republican users generate more political content , and their videos receive more reactions. However, Democratic partisans engage significantly more in cross-partisan discussions.", "after_revision": "TikTok is a video-sharing social networking service, whose popularity is increasing rapidly. It was the world's second-most downloaded app in 2019. Although the platform is known for having users posting videos of themselves dancing, lip-syncing , or showcasing other talents, user-videos expressing political views have seen a recent spurt. This study aims to perform a primary evaluation of political communication on TikTok . We collect a set of US partisan Republican and Democratic videos to investigate how users communicated with each other about political issues . With the help of computer vision, natural language processing, and statistical tools, we illustrate that political communication on TikTok is much more interactive in comparison to other social media platforms, with users combining multiple information channels to spread their messages. We show that political communication takes place in the form of communication trees since users generate branches of responses to existing content. In terms of user demographics, we find that users belonging to both the US parties are young and behave similarly on the platform. However, Republican users generated more political content and their videos received more responses; on the other hand, Democratic users engaged significantly more in cross-partisan discussions.", "edit_actions": [{"type": "A", "before": null, "after": "posting videos of themselves", "start_char_pos": 196, "end_char_pos": 196}, {"type": "R", "before": "and showing off their talents, there is an increase in videos designed to express political opinions. 
In this study , we perform the first", "after": ", or showcasing other talents, user-videos expressing political views have seen a recent spurt. This study aims to perform a primary", "start_char_pos": 218, "end_char_pos": 356}, {"type": "R", "before": "this platform", "after": "TikTok", "start_char_pos": 398, "end_char_pos": 411}, {"type": "A", "before": null, "after": "partisan", "start_char_pos": 437, "end_char_pos": 437}, {"type": "R", "before": "partisan videos and", "after": "videos to", "start_char_pos": 464, "end_char_pos": 483}, {"type": "R", "before": "communicate", "after": "communicated", "start_char_pos": 506, "end_char_pos": 517}, {"type": "A", "before": null, "after": "about political issues", "start_char_pos": 534, "end_char_pos": 534}, {"type": "A", "before": null, "after": "on TikTok", "start_char_pos": 666, "end_char_pos": 666}, {"type": "R", "before": "on TikTok in contrast", "after": "in comparison", "start_char_pos": 692, "end_char_pos": 713}, {"type": "R", "before": "on", "after": "to", "start_char_pos": 951, "end_char_pos": 953}, {"type": "R", "before": "Finally, we investigate user demographicsand their interactions with opposing views. We find that partisan users from both", "after": "In terms of user demographics, we find that users belonging to both the US", "start_char_pos": 972, "end_char_pos": 1094}, {"type": "R", "before": "We also find that Republican users generate", "after": "However, Republican users generated", "start_char_pos": 1151, "end_char_pos": 1194}, {"type": "D", "before": ",", "after": null, "start_char_pos": 1218, "end_char_pos": 1219}, {"type": "R", "before": "receive more reactions. 
However, Democratic partisans engage", "after": "received more responses; on the other hand, Democratic users engaged", "start_char_pos": 1237, "end_char_pos": 1297}], "sents_char_pos": [0, 92, 147, 319, 413, 536, 823, 971, 1056, 1150, 1260]} {"doc_id": "2004.06033", "revision_depth": "1", "before_revision": "An increasing number of countries expands testing of symptomatic persons for infection with SARS-CoV-2 . It is important to consider efficient ways to collect information that will allow to understand the COVID-19 pandemic . We propose two types of case-control studies that can be carried out in test-settings for symptomatic persons. The first, the test-negative case-control design (TND) is the easiest to set up, since all symptomatic persons will be tested ; it only demands collecting information from these persons, some which will be collected routinely . The second, standard matched case-control studies (CC) demands that one person who accompanies the symptomatic persons to the test facility is asked the same information . We first summarize the TND and explain how to add it to large-scale testing of persons with signs and symptoms . The TND shows differences in risk factors between symptomatic persons with COVID-19 and persons with other respiratory infections . Factors that are risk factors of equal magnitude for both COVID-19 and other respiratory infections will not be identified by the TND. Second, we discuss how to add standard matched case-control studies (separately for SARS-CoV-2 test-positives and test-negatives ) by asking accompanying persons to the test facilities to become controls. Those case-control studies give a contrast between COVID-19 patients and matched persons from the general population. We give suggestions for other types of population controls . 
Such studies distinguish between exposures that are risk factors for both COVID-19 and other respiratory infections, and exposures that are risk factors for just COVID-19 or just for other respiratory infections. Incorporating the test-negative design into on-going testing efforts is useful in itself, but it would be more useful with the addition of the matched case-control designs .", "after_revision": "Testing of symptomatic persons for infection with SARS-CoV-2 is increasingly used in different countries. To collect information to understand the causes of the COVID-19 pandemic , we propose two types of case-control studies that can be carried out jointly in test-settings for symptomatic persons. The first, the test-negative case-control design (TND) is the easiest to implement ; it only demands collecting information about potential risk factors for COVID-19 from the tested symptomatic persons . The second, standard case-control studies (CC) with population controls, requires the collection of data on one or more population controls for each person who is tested in the test facilities, so that test-positives and test-negatives can each be compared with population controls . We first summarize the TND and explain how to add it to large-scale testing of symptomatic persons . The TND will detect differences in risk factors between symptomatic persons who have COVID-19 (test-positives) and those who have other respiratory infections (test-negatives). However, risk factors with effect sizes of equal magnitude for both COVID-19 and other respiratory infections will not be identified by the TND. Second, we discuss how to add population controls to compare with the test-positives and the test-negatives separately, yielding two additional case-control studies. We describe two different types of population control groups: one composed of accompanying persons to the test facilities , the other drawn from existing country-wide health care databases. 
We also describe other types of population controls that may be most suitable in other situations. Combining the test-negative design with population controls yields a triangulation approach that distinguishes between exposures that are risk factors for both COVID-19 and other respiratory infections, and exposures that are risk factors for just COVID-19 .", "edit_actions": [{"type": "R", "before": "An increasing number of countries expands testing of", "after": "Testing of", "start_char_pos": 0, "end_char_pos": 52}, {"type": "R", "before": ". It is important to consider efficient ways to collect information that will allow", "after": "is increasingly used in different countries. To collect information", "start_char_pos": 103, "end_char_pos": 186}, {"type": "A", "before": null, "after": "causes of the", "start_char_pos": 205, "end_char_pos": 205}, {"type": "R", "before": ". We", "after": ", we", "start_char_pos": 224, "end_char_pos": 228}, {"type": "A", "before": null, "after": "jointly", "start_char_pos": 295, "end_char_pos": 295}, {"type": "R", "before": "set up, since all symptomatic persons will be tested", "after": "implement", "start_char_pos": 411, "end_char_pos": 463}, {"type": "R", "before": "from these persons, some which will be collected routinely", "after": "about potential risk factors for COVID-19 from the tested symptomatic persons", "start_char_pos": 505, "end_char_pos": 563}, {"type": "D", "before": "matched", "after": null, "start_char_pos": 587, "end_char_pos": 594}, {"type": "R", "before": "demands that one person who accompanies the symptomatic persons to the test facility is asked the same information", "after": "with population controls, requires the collection of data on one or more population controls for each person who is tested in the test facilities, so that test-positives and test-negatives can each be compared with population controls", "start_char_pos": 621, "end_char_pos": 735}, {"type": "R", "before": "persons with signs and 
symptoms", "after": "symptomatic persons", "start_char_pos": 817, "end_char_pos": 848}, {"type": "R", "before": "shows", "after": "will detect", "start_char_pos": 859, "end_char_pos": 864}, {"type": "R", "before": "with", "after": "who have", "start_char_pos": 921, "end_char_pos": 925}, {"type": "R", "before": "and persons with", "after": "(test-positives) and those who have", "start_char_pos": 935, "end_char_pos": 951}, {"type": "R", "before": ". Factors that are risk factors", "after": "(test-negatives). However, risk factors with effect sizes", "start_char_pos": 981, "end_char_pos": 1012}, {"type": "R", "before": "standard matched case-control studies (separately for SARS-CoV-2", "after": "population controls to compare with the", "start_char_pos": 1148, "end_char_pos": 1212}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1232, "end_char_pos": 1232}, {"type": "R", "before": ") by asking", "after": "separately, yielding two additional case-control studies. We describe two different types of population control groups: one composed of", "start_char_pos": 1248, "end_char_pos": 1259}, {"type": "R", "before": "to become controls. Those case-control studies give a contrast between COVID-19 patients and matched persons from the general population. We give suggestions for", "after": ", the other drawn from existing country-wide health care databases. We also describe", "start_char_pos": 1304, "end_char_pos": 1465}, {"type": "R", "before": ". Such studies distinguish", "after": "that may be most suitable in other situations. Combining the test-negative design with population controls yields a triangulation approach that distinguishes", "start_char_pos": 1501, "end_char_pos": 1527}, {"type": "D", "before": "or just for other respiratory infections. 
Incorporating the test-negative design into on-going testing efforts is useful in itself, but it would be more useful with the addition of the matched case-control designs", "after": null, "start_char_pos": 1674, "end_char_pos": 1887}], "sents_char_pos": [0, 225, 337, 465, 565, 737, 850, 982, 1117, 1323, 1441, 1502, 1715]} {"doc_id": "2004.06539", "revision_depth": "2", "before_revision": "Migration of scholars is a major driver of innovation and diffusion of knowledge. Although large-scale bibliometric data have been used to measure international migration of scholars, our understanding of internal migration among researchers is very limited. This is partly due to lack of data aggregated at a suitable sub-national level. In this study, we analyze internal migration in Mexico based on over 1.1 million authorship records from the Scopus database. We trace movements of scholars between Mexican states and provide key demographic measures of internal migration for the period 1996-2018 . From a methodological perspective, we develop a new framework for enhancing data quality, inferring states from affiliations, and detecting moves from modal states for the purposes of studying internal migration between researchers. Substantively, we combine demographic and network science techniques to improve our understanding of internal migration patterns country boundaries. Migration patterns between states in Mexico appear to be heterogeneous in size and direction across regions. However, while many scholars remain in their regions, there seems to be a preference for Mexico City and the surrounding states as a destination. Over the past two decades, we observed a general decreasing trend in the crude migration intensity. However, the migration network has become more dense , and more diverse, including greater exchange between states along the Gulf and the Pacific Coast. 
Our analysis, which is mostly empirical in nature, sets the foundations for testing and developing theories that can rely on the analytical framework developed by migration scholars and the richness of appropriately processed bibliometric data.", "after_revision": "The migration of scholars is a major driver of innovation and of diffusion of knowledge. Although large-scale bibliometric data have been used to measure international migration of scholars, our understanding of internal migration among researchers is very limited. This is partly due to a lack of data aggregated at a suitable sub-national level. In this study, we analyze internal migration in Mexico based on over 1.1 million authorship records from the Scopus database. We trace the movements of scholars between Mexican states , and provide key demographic measures of internal migration for the 1996-2018 period . From a methodological perspective, we develop a new framework for enhancing data quality, inferring states from affiliations, and detecting moves from modal states for the purposes of studying internal migration among researchers. Substantively, we combine demographic and network science techniques to improve our understanding of internal migration patterns within country boundaries. The migration patterns between states in Mexico appear to be heterogeneous in size and direction across regions. However, while many scholars remain in their regions, there seems to be a preference for Mexico City and the surrounding states as migration destinations. We observed that over the past two decades, there has been a general decreasing trend in the crude migration intensity. However, the migration network has become more dense and more diverse, and has included greater exchanges between states along the Gulf and the Pacific Coast. 
Our analysis, which is mostly empirical in nature, lays the foundations for testing and developing theories that can rely on the analytical framework developed by migration scholars , and the richness of appropriately processed bibliometric data.", "edit_actions": [{"type": "R", "before": "Migration", "after": "The migration", "start_char_pos": 0, "end_char_pos": 9}, {"type": "A", "before": null, "after": "of", "start_char_pos": 58, "end_char_pos": 58}, {"type": "A", "before": null, "after": "a", "start_char_pos": 282, "end_char_pos": 282}, {"type": "A", "before": null, "after": "the", "start_char_pos": 476, "end_char_pos": 476}, {"type": "A", "before": null, "after": ",", "start_char_pos": 522, "end_char_pos": 522}, {"type": "D", "before": "period", "after": null, "start_char_pos": 590, "end_char_pos": 596}, {"type": "A", "before": null, "after": "period", "start_char_pos": 607, "end_char_pos": 607}, {"type": "R", "before": "between", "after": "among", "start_char_pos": 822, "end_char_pos": 829}, {"type": "A", "before": null, "after": "within", "start_char_pos": 972, "end_char_pos": 972}, {"type": "R", "before": "Migration", "after": "The migration", "start_char_pos": 993, "end_char_pos": 1002}, {"type": "R", "before": "a destination. Over", "after": "migration destinations. 
We observed that over", "start_char_pos": 1233, "end_char_pos": 1252}, {"type": "R", "before": "we observed", "after": "there has been", "start_char_pos": 1275, "end_char_pos": 1286}, {"type": "D", "before": ",", "after": null, "start_char_pos": 1401, "end_char_pos": 1402}, {"type": "R", "before": "including greater exchange", "after": "and has included greater exchanges", "start_char_pos": 1421, "end_char_pos": 1447}, {"type": "R", "before": "sets", "after": "lays", "start_char_pos": 1552, "end_char_pos": 1556}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1683, "end_char_pos": 1683}], "sents_char_pos": [0, 82, 259, 340, 466, 842, 992, 1101, 1247, 1347, 1500]} {"doc_id": "2004.06774", "revision_depth": "2", "before_revision": "Time-critical analysis of social media streams is important for URLanizations to plan rapid response during disasters. The crisis informatics research community has developed several techniques and systems to process and classify big crisis related data posted on social media. However, due to the dispersed nature of the datasets used in the literature , it is not possible to compare the results and measure the progress made towards better models for crisis informatics . In this work, we attempt to bridge this gap by standardizing various existing crisis-related datasets. We consolidate labels of eight annotated data sources and provide 166.1k and 141.5k tweets for informativeness and humanitarian \\textit{ classification tasks, respectively. The consolidation results in a larger dataset that affords the ability to train more sophisticated models. To that end , we provide baseline results using CNN and BERT models . We make the dataset available at URL", "after_revision": "Time-critical analysis of social media streams is important for URLanizations for planing rapid response during disasters. 
The crisis informatics research community has developed several techniques and systems for processing and classifying big crisis-related data posted on social media. However, due to the dispersed nature of the datasets used in the literature (e.g., for training models), it is not possible to compare the results and measure the progress made towards building better models for crisis informatics tasks . In this work, we attempt to bridge this gap by combining various existing crisis-related datasets. We consolidate eight human-annotated datasets and provide 166.1k and 141.5k tweets for informativeness and\\textit{humanitarian classification tasks, respectively. We believe that the consolidated dataset will help train more sophisticated models. Moreover , we provide benchmarks for both binary and multiclass classification tasks using several deep learning architecrures including, CNN, fastText, and transformers . We make the dataset and scripts available at: URL", "edit_actions": [{"type": "R", "before": "to plan", "after": "for planing", "start_char_pos": 78, "end_char_pos": 85}, {"type": "R", "before": "crisis informatics", "after": "crisis informatics", "start_char_pos": 123, "end_char_pos": 141}, {"type": "R", "before": "to process and classify big crisis related", "after": "for processing and classifying big crisis-related", "start_char_pos": 206, "end_char_pos": 248}, {"type": "R", "before": ",", "after": "(e.g., for training models),", "start_char_pos": 354, "end_char_pos": 355}, {"type": "A", "before": null, "after": "building", "start_char_pos": 436, "end_char_pos": 436}, {"type": "A", "before": null, "after": "tasks", "start_char_pos": 474, "end_char_pos": 474}, {"type": "R", "before": "standardizing", "after": "combining", "start_char_pos": 524, "end_char_pos": 537}, {"type": "R", "before": "labels of eight annotated data sources", "after": "eight human-annotated datasets", "start_char_pos": 595, "end_char_pos": 633}, {"type": "R", "before": 
"informativeness and humanitarian", "after": "informativeness", "start_char_pos": 675, "end_char_pos": 707}, {"type": "A", "before": null, "after": "and", "start_char_pos": 708, "end_char_pos": 708}, {"type": "A", "before": null, "after": "humanitarian", "start_char_pos": 716, "end_char_pos": 716}, {"type": "R", "before": "The consolidation results in a larger dataset that affords the ability to", "after": "We believe that the consolidated dataset will help", "start_char_pos": 753, "end_char_pos": 826}, {"type": "R", "before": "To that end", "after": "Moreover", "start_char_pos": 860, "end_char_pos": 871}, {"type": "R", "before": "baseline results using CNN and BERT models", "after": "benchmarks for both binary and multiclass classification tasks using several deep learning architecrures including, CNN, fastText, and transformers", "start_char_pos": 885, "end_char_pos": 927}, {"type": "R", "before": "available at", "after": "and scripts available at:", "start_char_pos": 950, "end_char_pos": 962}], "sents_char_pos": [0, 118, 277, 476, 579, 752, 859, 929]} {"doc_id": "2004.06828", "revision_depth": "1", "before_revision": "The population recovery problem asks one to recover an unknown distribution over n-bit strings given query access to independent noisy samples of strings drawn from the distribution. Recently, Ban et . al. [BCF+19] studied the problem where the unknown distribution over n-bit strings is known to be \\ell-sparse for some fixed \\ell, and the noise is induced through the deletion channel. The deletion channel is a noise model where each bit of the string is independently deleted with some fixed probability, and the retained bits are concatenated. We note that if \\ell = 1, i.e., we are trying to learn a single string , learning the distribution is equivalent to the famous trace reconstruction problem. The best known algorithms for trace reconstruction require \\exp%DIFDELCMD < \\left(%%% O(n^{1/3)}%DIFDELCMD < \\right) %%% samples. 
For population recovery under the deletion channel , Ban et . al. provided an algorithm that could learn \\ell-sparse distributions over strings using \\exp %DIFDELCMD < \\left(%%% \\big( n^{1/2} \\cdot (\\log n)^{O(\\ell)} %DIFDELCMD < \\right) %%% \\big) samples. In this work, we provide an algorithm that learns the distribution using only \\exp\\big(O(n^{1/3}) \\cdot \\ell^2\\big) samples, by developing a higher-moment analog of the algorithms of [DOS17, NP17] \\big(O)}\\big) . We also give the first algorithm with a runtime subexponential in n, which solves population recovery in \\exp\\big(\\tilde{O}(n^{1/3}) \\cdot \\ell^3\\big) samples and time. Notably, our dependence on n nearly matches the known upper bound when \\ell = 1 , and we reduce the dependence on \\ell from doubly to nearly singly exponential. Therefore, we are able to learn the mixture even for much larger values of \\ell. For instance, Ban et . al.'s algorithm can only learn a mixture of O(\\log n/\\log \\log n) strings with a subexponential number of queries, whereas we are able to learn a mixture of up to n^{o(1)} strings in subexponential queries\\big(}\\big) and time.", "after_revision": "The population recovery problem asks one to recover an unknown distribution over n-bit strings given access to independent noisy samples of strings drawn from the distribution. Recently, Ban et al. [BCF+19] studied the problem where the noise is induced through the deletion channel. This problem generalizes the famous trace reconstruction problem, where one wishes to learn a single string %DIFDELCMD < \\left(%%% )}%DIFDELCMD < \\right) %%% under the deletion channel . Ban et al. showed how to learn \\ell-sparse distributions over strings using \\exp %DIFDELCMD < \\left(%%% \\big( n^{1/2} \\cdot (\\log n)^{O(\\ell)} %DIFDELCMD < \\right) %%% \\big) samples. 
In this work, we learn the distribution using only \\exp\\big(O(n^{1/3}) \\cdot \\ell^2\\big) samples, by developing a higher-moment analog of the algorithms of [DOS17, NP17] , which solve trace reconstruction in \\exp\\big(O(n^{1/3)}\\big) samples . We also give the first algorithm with a runtime subexponential in n, solving population recovery in \\exp\\big(\\tilde{O}(n^{1/3}) \\cdot \\ell^3\\big) samples and time. Notably, our dependence on n nearly matches the upper bound of DOS17, NP17 when \\ell = O( 1 ) , and we reduce the dependence on \\ell from doubly to singly exponential. Therefore, we are able to learn large mixtures of strings: while Ban et al.'s algorithm can only learn a mixture of O(\\log n/\\log \\log n) strings with a subexponential number of samples, we are able to learn a mixture of n^{o(1)} strings in \\exp\\big(n^{1/3 + o(1)}\\big) samples and time.", "edit_actions": [{"type": "D", "before": "query", "after": null, "start_char_pos": 101, "end_char_pos": 106}, {"type": "D", "before": ".", "after": null, "start_char_pos": 200, "end_char_pos": 201}, {"type": "D", "before": "unknown distribution over n-bit strings is known to be \\ell-sparse for some fixed \\ell, and the", "after": null, "start_char_pos": 245, "end_char_pos": 340}, {"type": "R", "before": "The deletion channel is a noise model where each bit of the string is independently deleted with some fixed probability, and the retained bits are concatenated. We note that if \\ell = 1, i.e., we are trying", "after": "This problem generalizes the famous trace reconstruction problem, where one wishes", "start_char_pos": 388, "end_char_pos": 594}, {"type": "D", "before": ", learning the distribution is equivalent to the famous trace reconstruction problem. 
The best known algorithms for trace reconstruction require \\exp", "after": null, "start_char_pos": 620, "end_char_pos": 769}, {"type": "D", "before": "O(n^{1/3", "after": null, "start_char_pos": 792, "end_char_pos": 800}, {"type": "D", "before": "samples. For population recovery", "after": null, "start_char_pos": 827, "end_char_pos": 859}, {"type": "R", "before": ", Ban et . al. provided an algorithm that could", "after": ". Ban et al. showed how to", "start_char_pos": 887, "end_char_pos": 934}, {"type": "R", "before": "provide an algorithm that learns", "after": "learn", "start_char_pos": 1110, "end_char_pos": 1142}, {"type": "A", "before": null, "after": ", which solve trace reconstruction in \\exp", "start_char_pos": 1290, "end_char_pos": 1290}, {"type": "A", "before": null, "after": "(n^{1/3", "start_char_pos": 1296, "end_char_pos": 1296}, {"type": "A", "before": null, "after": "samples", "start_char_pos": 1304, "end_char_pos": 1304}, {"type": "R", "before": "which solves", "after": "solving", "start_char_pos": 1376, "end_char_pos": 1388}, {"type": "R", "before": "known upper bound", "after": "upper bound of", "start_char_pos": 1524, "end_char_pos": 1541}, {"type": "A", "before": null, "after": "DOS17, NP17", "start_char_pos": 1542, "end_char_pos": 1542}, {"type": "A", "before": null, "after": "O(", "start_char_pos": 1555, "end_char_pos": 1555}, {"type": "A", "before": null, "after": ")", "start_char_pos": 1558, "end_char_pos": 1558}, {"type": "D", "before": "nearly", "after": null, "start_char_pos": 1613, "end_char_pos": 1619}, {"type": "R", "before": "the mixture even for much larger values of \\ell. 
For instance, Ban et .", "after": "large mixtures of strings: while Ban et", "start_char_pos": 1672, "end_char_pos": 1743}, {"type": "R", "before": "queries, whereas", "after": "samples,", "start_char_pos": 1850, "end_char_pos": 1866}, {"type": "D", "before": "up to", "after": null, "start_char_pos": 1901, "end_char_pos": 1906}, {"type": "R", "before": "subexponential queries", "after": "\\exp", "start_char_pos": 1927, "end_char_pos": 1949}, {"type": "A", "before": null, "after": "n^{1/3 + o(1)", "start_char_pos": 1954, "end_char_pos": 1954}, {"type": "A", "before": null, "after": "samples", "start_char_pos": 1961, "end_char_pos": 1961}], "sents_char_pos": [0, 182, 387, 548, 705, 835, 1092, 1306, 1475, 1639, 1720]} {"doc_id": "2004.07224", "revision_depth": "3", "before_revision": "The fatality rate of SARS-Cov-2 escalates with age and is larger in men than women. I show that these variations are strongly correlated with the levels of the ACE2 protein in the lungsbut surprisingly, despite ACE2 is the viral receptor, higher levels lead to lower fatality. This behaviour is consistent with a previous mathematical model that predicts that the speed of viral progression in URLanism has a maximum and then declines with the receptor level. SARS-Cov-2 degrades ACE2 and thus worsens lung injury, causes vasoconstriction, thrombotic problems, and exacerbated inflammatory response . I developed a mathematical model based on the influence of ACE2 on viral propagation and on the negative effects of its degradation . The model fits SARS-CoV-2 fatality rate across age and gender with high accuracy (r^2>0.9) . Rescaling the model parameters with the binding rates of the spike proteins of SARS-CoV and SARS-CoV-2 allows predicting the fatality rate of SARS-CoV across age and gender, in particular its higher severity for young patients, thus linking the molecular and epidemiological levels. 
These results support the suggestion that drugs that enhance the expression of ACE2, such as ACE inhibitors and angiotensin receptor blockers , constitute a promising therapy against the most adverse effects of CoViD-19 . Furthermore, ACE2 is a candidate prognostic factor for detecting population that needs stronger protection.", "after_revision": "The fatality rate of Covid-19 escalates with age and is larger in men than women. I show that these variations correlate strongly with the level of the viral receptor protein ACE2 in rat lungs, which is consistent with the still limited and apparently contradictory data on human ACE2. Surprisingly, lower levels of the receptor correlate with higher fatality. However, a previous mathematical model predicts that the speed of viral progression in URLanism has a maximum and then declines with the receptor level. Moreover, many manifestations of severe CoViD-19, such as severe lung injury, exacerbated inflammatory response and thrombotic problems may derive from increased Angiotensin II (Ang-II) level that results from degradation of ACE2 by the virus. I present here a mathematical model based on the influence of ACE2 on viral propagation and disease severity . The model fits Covid-19 fatality rate across age and sex with high accuracy (r^2>0.9) under the hypothesis that SARS-CoV-2 infections are in the dynamical regimes in which increased receptor slows down viral propagation. Moreover, rescaling the model parameters by the ratio of the binding rates of the spike proteins of SARS-CoV and SARS-CoV-2 allows predicting the fatality rate of SARS-CoV across age and sex, thus linking the molecular and epidemiological levels. 
The presented model opposes the fear that angiotensin receptor blockers (ARB), suggested as a therapy against the most adverse effects of CoViD-19 , may favour viral propagation, and suggests that Ang-II and ACE2 are candidate prognostic factors for detecting population that needs stronger protection.", "edit_actions": [{"type": "R", "before": "SARS-Cov-2", "after": "Covid-19", "start_char_pos": 21, "end_char_pos": 31}, {"type": "R", "before": "are strongly correlated with the levels of the ACE2 protein in the lungsbut surprisingly, despite ACE2 is", "after": "correlate strongly with", "start_char_pos": 113, "end_char_pos": 218}, {"type": "R", "before": "viral receptor, higher levels lead to lower fatality. This behaviour is consistent with", "after": "level of the viral receptor protein ACE2 in rat lungs, which is consistent with the still limited and apparently contradictory data on human ACE2. Surprisingly, lower levels of the receptor correlate with higher fatality. However,", "start_char_pos": 223, "end_char_pos": 310}, {"type": "D", "before": "that", "after": null, "start_char_pos": 341, "end_char_pos": 345}, {"type": "R", "before": "SARS-Cov-2 degrades ACE2 and thus worsens", "after": "Moreover, many manifestations of severe CoViD-19, such as severe", "start_char_pos": 460, "end_char_pos": 501}, {"type": "D", "before": "causes vasoconstriction, thrombotic problems, and", "after": null, "start_char_pos": 515, "end_char_pos": 564}, {"type": "R", "before": ". I developed", "after": "and thrombotic problems may derive from increased Angiotensin II (Ang-II) level that results from degradation of ACE2 by the virus. 
I present here", "start_char_pos": 599, "end_char_pos": 612}, {"type": "R", "before": "on the negative effects of its degradation", "after": "disease severity", "start_char_pos": 690, "end_char_pos": 732}, {"type": "R", "before": "SARS-CoV-2", "after": "Covid-19", "start_char_pos": 750, "end_char_pos": 760}, {"type": "R", "before": "gender", "after": "sex", "start_char_pos": 790, "end_char_pos": 796}, {"type": "R", "before": ". Rescaling", "after": "under the hypothesis that SARS-CoV-2 infections are in the dynamical regimes in which increased receptor slows down viral propagation. Moreover, rescaling", "start_char_pos": 826, "end_char_pos": 837}, {"type": "R", "before": "with the", "after": "by the ratio of the", "start_char_pos": 859, "end_char_pos": 867}, {"type": "R", "before": "gender, in particular its higher severity for young patients,", "after": "sex,", "start_char_pos": 994, "end_char_pos": 1055}, {"type": "R", "before": "These results support the suggestion that drugs that enhance the expression of ACE2, such as ACE inhibitors and", "after": "The presented model opposes the fear that", "start_char_pos": 1111, "end_char_pos": 1222}, {"type": "R", "before": ", constitute a promising", "after": "(ARB), suggested as a", "start_char_pos": 1253, "end_char_pos": 1277}, {"type": "R", "before": ". Furthermore,", "after": ", may favour viral propagation, and suggests that Ang-II and", "start_char_pos": 1331, "end_char_pos": 1345}, {"type": "R", "before": "is a candidate prognostic factor", "after": "are candidate prognostic factors", "start_char_pos": 1351, "end_char_pos": 1383}], "sents_char_pos": [0, 83, 276, 459, 600, 734, 1110]} {"doc_id": "2004.08444", "revision_depth": "1", "before_revision": "Approximate nearest neighbor search (ANNS) is a long-studied problem in computational geometry that has received considerable attentions by researchers in the community. 
In this paper, we revisit the problem in the presence of curves under the Fr\\'echet distance \\Reals . Given a set %DIFDELCMD < {\\cal %%% P of n curves of size at most m each in \\mathbb{R \\Reals and a real \\delta>0, we aim to preprocess %DIFDELCMD < {\\cal %%% P into a data structure so that for any given query curve Q of size k, report all curves in %DIFDELCMD < {\\cal %%% P whose Fr\\'echet distances to Q are at most \\delta. In case that k is known in the preprocessing stage \\eps we propose a fully deterministic data structure whose space is O(n ( 32d\\big(\\big\\{\\big(}{\\eps}}\\big) ^{ 1/2 } /\\varepsilon^3 )\\big(}{\\eps^2}}\\big) ^{ d(k+1) } ) and can answer the \\big\\}\\big) \\textsc{(1+ \\varepsilon \\eps )\\delta-ANNS} queries in O(kd) query time \\D . Considering k as part of the query slightly changes the space to O( n ( 64d^{1/2/\\varepsilon^3}%DIFDELCMD < }%%% )^{md\\big({\\eps}}\\big) with O(kd) query time within 5 (1 + \\varepsilon) approximation factor . We also \\eps show that our data structure could give an alternative treatment of the approximate subtrajectory range counting (\\textsc{ASRC ) problem studied by de Berg et al. ~\\mbox{%DIFAUXCMD \\cite{bcg-ffq-13}\\hspace{0pt}%DIFAUXCMD } [ [ \\eps\\eps .", "after_revision": "Approximate near-neighbors search (ANNS) is a long-studied problem in computational geometry . \\% that has received considerable attention by researchers in the community. In this paper, we revisit the problem and propose the first data structure for curves under the (continuous) Fr\\'echet distance in\\Reals^d . Given a set %DIFDELCMD < {\\cal %%% \\P of n curves of size at most m each in \\Reals^d, and a real fixed \\delta>0, we aim to preprocess %DIFDELCMD < {\\cal %%% \\P into a data structure so that for any given query curve Q of size k, we can efficiently report all curves in %DIFDELCMD < {\\cal %%% \\P whose Fr\\'echet distances to Q are at most \\delta. 
In the case that k is given in the preprocessing stage , for any\\eps>0 we propose a deterministic data structure whose space is n \\cdot O\\big(\\max\\big\\{\\big(\\frac{\\sqrt{d}{\\eps}}\\big) ^{ kd } ,\\big(\\frac{\\D\\sqrt{d}{\\eps^2}}\\big) ^{ kd } \\big\\}\\big) that can answer \\textsc{(1+ \\eps )\\delta-ANNS} queries in O(kd) query time , where\\D is the diameter of \\P . Considering k as part of the query slightly changes the space to /\\varepsilon^3}%DIFDELCMD < }%%% n \\cdot O\\big(\\frac{1{\\eps}}\\big)^{md with O(kd) query time within an approximation factor of 5 + \\eps. We show that our generic data structure for ANNS can give an alternative treatment of the approximate subtrajectory range \\textsc{ searching problem studied by de Berg et al. }\\hspace{0pt}%DIFAUXCMD } [8 . We also revisit the time-window data structure for spatial density maps in[6 . Given \\theta>0, and n time-stamped points spread over m regions in a map, for any query window W, we propose a data structure of size O(n/\\eps^2) and construction time O((n+m)/\\eps^2) that can approximately return the regions containing at least \\theta points whose times are within W in O(1) query time .", "edit_actions": [{"type": "R", "before": "nearest neighbor", "after": "near-neighbors", "start_char_pos": 12, "end_char_pos": 28}, {"type": "A", "before": null, "after": ". 
\\%", "start_char_pos": 95, "end_char_pos": 95}, {"type": "R", "before": "attentions", "after": "attention", "start_char_pos": 127, "end_char_pos": 137}, {"type": "R", "before": "in the presence of", "after": "and propose the first data structure for", "start_char_pos": 209, "end_char_pos": 227}, {"type": "A", "before": null, "after": "(continuous)", "start_char_pos": 245, "end_char_pos": 245}, {"type": "A", "before": null, "after": "in", "start_char_pos": 265, "end_char_pos": 265}, {"type": "A", "before": null, "after": "^d", "start_char_pos": 271, "end_char_pos": 271}, {"type": "R", "before": "P", "after": "\\P", "start_char_pos": 309, "end_char_pos": 310}, {"type": "D", "before": "\\mathbb{R", "after": null, "start_char_pos": 349, "end_char_pos": 358}, {"type": "A", "before": null, "after": "^d,", "start_char_pos": 365, "end_char_pos": 365}, {"type": "A", "before": null, "after": "fixed", "start_char_pos": 377, "end_char_pos": 377}, {"type": "R", "before": "P", "after": "\\P", "start_char_pos": 432, "end_char_pos": 433}, {"type": "A", "before": null, "after": "we can efficiently", "start_char_pos": 503, "end_char_pos": 503}, {"type": "R", "before": "P", "after": "\\P", "start_char_pos": 548, "end_char_pos": 549}, {"type": "A", "before": null, "after": "the", "start_char_pos": 604, "end_char_pos": 604}, {"type": "R", "before": "known", "after": "given", "start_char_pos": 620, "end_char_pos": 625}, {"type": "A", "before": null, "after": ", for any", "start_char_pos": 653, "end_char_pos": 653}, {"type": "A", "before": null, "after": ">0", "start_char_pos": 657, "end_char_pos": 657}, {"type": "D", "before": "fully", "after": null, "start_char_pos": 671, "end_char_pos": 676}, {"type": "D", "before": "O(n (", "after": null, "start_char_pos": 721, "end_char_pos": 726}, {"type": "R", "before": "32d", "after": "n \\cdot O", "start_char_pos": 727, "end_char_pos": 730}, {"type": "A", "before": null, "after": "\\max", "start_char_pos": 735, "end_char_pos": 735}, {"type": 
"A", "before": null, "after": "\\frac{\\sqrt{d", "start_char_pos": 746, "end_char_pos": 746}, {"type": "R", "before": "1/2", "after": "kd", "start_char_pos": 763, "end_char_pos": 766}, {"type": "D", "before": "/\\varepsilon^3", "after": null, "start_char_pos": 769, "end_char_pos": 783}, {"type": "R", "before": ")", "after": ",", "start_char_pos": 784, "end_char_pos": 785}, {"type": "A", "before": null, "after": "\\frac{\\D\\sqrt{d", "start_char_pos": 790, "end_char_pos": 790}, {"type": "R", "before": "d(k+1)", "after": "kd", "start_char_pos": 809, "end_char_pos": 815}, {"type": "D", "before": ") and can answer the", "after": null, "start_char_pos": 818, "end_char_pos": 838}, {"type": "A", "before": null, "after": "that can answer", "start_char_pos": 851, "end_char_pos": 851}, {"type": "D", "before": "\\varepsilon", "after": null, "start_char_pos": 864, "end_char_pos": 875}, {"type": "A", "before": null, "after": ", where", "start_char_pos": 923, "end_char_pos": 923}, {"type": "A", "before": null, "after": "is the diameter of \\P", "start_char_pos": 926, "end_char_pos": 926}, {"type": "D", "before": "O( n (", "after": null, "start_char_pos": 994, "end_char_pos": 1000}, {"type": "D", "before": "64d^{1/2", "after": null, "start_char_pos": 1001, "end_char_pos": 1009}, {"type": "R", "before": ")^{md", "after": "n \\cdot O", "start_char_pos": 1042, "end_char_pos": 1047}, {"type": "A", "before": null, "after": "\\frac{1", "start_char_pos": 1052, "end_char_pos": 1052}, {"type": "A", "before": null, "after": "^{md", "start_char_pos": 1064, "end_char_pos": 1064}, {"type": "A", "before": null, "after": "an approximation factor of", "start_char_pos": 1094, "end_char_pos": 1094}, {"type": "D", "before": "(1", "after": null, "start_char_pos": 1097, "end_char_pos": 1099}, {"type": "D", "before": "\\varepsilon) approximation factor . We also", "after": null, "start_char_pos": 1102, "end_char_pos": 1145}, {"type": "A", "before": null, "after": ". 
We", "start_char_pos": 1150, "end_char_pos": 1150}, {"type": "R", "before": "data structure could", "after": "generic data structure for ANNS can", "start_char_pos": 1165, "end_char_pos": 1185}, {"type": "D", "before": "counting (", "after": null, "start_char_pos": 1255, "end_char_pos": 1265}, {"type": "D", "before": "ASRC", "after": null, "start_char_pos": 1273, "end_char_pos": 1277}, {"type": "R", "before": ")", "after": "searching", "start_char_pos": 1278, "end_char_pos": 1279}, {"type": "D", "before": "~\\mbox{%DIFAUXCMD \\cite{bcg-ffq-13", "after": null, "start_char_pos": 1314, "end_char_pos": 1348}, {"type": "A", "before": null, "after": "8", "start_char_pos": 1375, "end_char_pos": 1375}, {"type": "A", "before": null, "after": ". We also revisit the time-window data structure for spatial density maps in", "start_char_pos": 1376, "end_char_pos": 1376}, {"type": "A", "before": null, "after": "6", "start_char_pos": 1377, "end_char_pos": 1377}, {"type": "A", "before": null, "after": ". Given \\theta>0, and n time-stamped points spread over m regions in a map, for any query window W, we propose a data structure of size O(n/", "start_char_pos": 1378, "end_char_pos": 1378}, {"type": "A", "before": null, "after": "^2) and construction time O((n+m)/", "start_char_pos": 1382, "end_char_pos": 1382}, {"type": "A", "before": null, "after": "^2) that can approximately return the regions containing at least \\theta points whose times are within W in O(1) query time", "start_char_pos": 1386, "end_char_pos": 1386}], "sents_char_pos": [0, 170, 600, 759, 1137, 1313]} {"doc_id": "2004.10117", "revision_depth": "2", "before_revision": "In Coronavirus disease 2019 (COVID-19), the initial viral replication phase is often followed by a hyperinflammatory reaction in the lungs and URLan systems ('cytokine storm syndrome') that leads to acute respiratory distress syndrome (ARDS), URLan failure, and death- despite maximal supportive care. 
Preventing hyperinflammation is key to avoiding progression to severe stages of COVID-19. We have previously demonstrated that alpha-1 adrenergic receptor (\\alpha_1-AR) antagonists can prevent cytokine storm syndrome and resulting death in mice. Here, we present a retrospective study of outcomes in patients with acute respiratory distress (n = 13,125) or pneumonia (n = 108,956) . Patients with acute respiratory distress who were taking \\alpha_1-AR antagonists for other conditions had a 35\\% reduced risk of requiring ventilation , and a 56\\% reduced risk of ventilation and death, compared to non-users (adjusted OR = 0.41 , 95\\% CI 0.17-0.81, p = 0.01). By contrast, no significant effect was observed for beta-adrenergic receptor (\\beta-AR) antagonists . These results support studying \\alpha_1-AR antagonists for preventing ARDS and reducing mortality in pneumonia and acute respiratory distress, as well as highlight the need for prospective trials of \\alpha_1-AR antagonists to assess their efficacy in preventing cytokine storm syndrome and death in COVID-19.", "after_revision": "In severe pneumonias, including Coronavirus disease 2019 (COVID-19), the viral replication phase is often followed by a hyperinflammatory reaction ('cytokine storm syndrome') that leads to acute respiratory distress syndrome and death, despite maximal supportive care. Preventing hyperinflammation is key to avoiding these outcomes. We previously demonstrated that alpha-1 adrenergic receptor antagonists (\\alpha-blockers) can prevent cytokine storm syndrome and death in mice. Here, we conduct a retrospective analysis of patients with acute respiratory distress (n = 13,125) or pneumonia (n = 108,956) from all causes; patients who were incidentally taking \\alpha-blockers had a reduced risk of requiring ventilation (by 35\\% and 16\\%, respectively), and a reduced risk of being ventilated and dying (by 56\\% and 20\\%, respectively) , compared to non-users . 
Beta-adrenergic receptor antagonists had no significant effects. These results highlight the urgent need for prospective trials testing whether prophylactic \\alpha-blockers improve outcomes in diseases with a prominent hyperinflammatory component such as COVID-19.", "edit_actions": [{"type": "A", "before": null, "after": "severe pneumonias, including", "start_char_pos": 3, "end_char_pos": 3}, {"type": "D", "before": "initial", "after": null, "start_char_pos": 45, "end_char_pos": 52}, {"type": "D", "before": "in the lungs and URLan systems", "after": null, "start_char_pos": 127, "end_char_pos": 157}, {"type": "R", "before": "(ARDS), URLan failure, and death-", "after": "and death,", "start_char_pos": 236, "end_char_pos": 269}, {"type": "R", "before": "progression to severe stages of COVID-19. We have", "after": "these outcomes. We", "start_char_pos": 351, "end_char_pos": 400}, {"type": "R", "before": "(\\alpha_1-AR) antagonists", "after": "antagonists (\\alpha-blockers)", "start_char_pos": 458, "end_char_pos": 483}, {"type": "D", "before": "resulting", "after": null, "start_char_pos": 524, "end_char_pos": 533}, {"type": "R", "before": "present a retrospective study of outcomes in", "after": "conduct a retrospective analysis of", "start_char_pos": 558, "end_char_pos": 602}, {"type": "R", "before": ". Patients with acute respiratory distress who were taking \\alpha_1-AR antagonists for other conditions had a 35\\%", "after": "from all causes; patients who were incidentally taking \\alpha-blockers had a", "start_char_pos": 684, "end_char_pos": 798}, {"type": "R", "before": ", and a 56\\%", "after": "(by 35\\% and 16\\%, respectively), and a", "start_char_pos": 837, "end_char_pos": 849}, {"type": "R", "before": "ventilation and death, compared to non-users (adjusted OR = 0.41", "after": "being ventilated and dying (by 56\\% and 20\\%, respectively)", "start_char_pos": 866, "end_char_pos": 930}, {"type": "R", "before": "95\\% CI 0.17-0.81, p = 0.01). 
By contrast, no significant effect was observed for beta-adrenergic receptor (\\beta-AR) antagonists", "after": "compared to non-users", "start_char_pos": 933, "end_char_pos": 1062}, {"type": "A", "before": null, "after": "Beta-adrenergic receptor antagonists had no significant effects.", "start_char_pos": 1065, "end_char_pos": 1065}, {"type": "R", "before": "support studying \\alpha_1-AR antagonists for preventing ARDS and reducing mortality in pneumonia and acute respiratory distress, as well as highlight the", "after": "highlight the urgent", "start_char_pos": 1080, "end_char_pos": 1233}, {"type": "R", "before": "of \\alpha_1-AR antagonists to assess their efficacy in preventing cytokine storm syndrome and death in", "after": "testing whether prophylactic \\alpha-blockers improve outcomes in diseases with a prominent hyperinflammatory component such as", "start_char_pos": 1262, "end_char_pos": 1364}], "sents_char_pos": [0, 302, 392, 548, 685, 962, 1064]} {"doc_id": "2004.10282", "revision_depth": "1", "before_revision": "We introduce a learning-based strategy for multi-modal registration of images acquired with any modality, without requiring real dataduring training . While classical registration methods can accurately align multi-modal image pairs , they solve a costly optimization problem for every new pairof images . Learning-based techniques are fast at test time, but can only register images of the specific anatomy and modalities they were trained on . In contrast, our approach leverages a generative model to synthesize label maps and gray-scale images that expose a network to a wide range of anatomy and contrast during training. We demonstrate that this strategy enables robust registration of arbitrary modalities, without the need to retrain for a new modality. 
Critically, we show that input labels need not be of actual anatomy: training on randomly synthesized shapes , or supervoxels, results in competitive registration performance and makes the network agnostic to anatomy and contrast, all while eradicating the need for real data . We present extensive experiments demonstrating that this strategy enables registration of modalities not seen during training and surpasses the state of art in cross-contrast registration . Our code is integrated with the VoxelMorph library at: URL", "after_revision": "We introduce a learning strategy for contrast-invariant image registration without requiring imaging data . While classical registration methods accurately estimate the spatial correspondence between images , they solve a costly optimization problem for every image pair . Learning-based techniques are fast at test time, but can only register images with image contrast and geometric content that are similar to those available during training. We focus on removing this image-data dependency of learning methods. Our approach leverages a generative model for diverse label maps and images that exposes networks to a wide range of variability during training, forcing them to learn features invariant to image type (contrast). This strategy results in powerful networks trained to generalize to a broad array of real input images. We present extensive experiments, with a focus on 3D neuroimaging, showing that this strategy enables robust registration of arbitrary image contrasts without the need to retrain for new modalities. We demonstrate registration accuracy that most often surpasses the state of the art both within and across modalities, using a single model. 
Critically, we show that input labels from which we synthesize images need not be of actual anatomy: training on randomly generated geometric shapes also results in competitive registration performance , albeit slightly less accurate, while alleviating the dependency on real data of any kind . Our code is available at: URL", "edit_actions": [{"type": "R", "before": "learning-based strategy for multi-modal registration of images acquired with any modality, without requiring real dataduring training", "after": "learning strategy for contrast-invariant image registration without requiring imaging data", "start_char_pos": 15, "end_char_pos": 148}, {"type": "R", "before": "can accurately align multi-modal image pairs", "after": "accurately estimate the spatial correspondence between images", "start_char_pos": 188, "end_char_pos": 232}, {"type": "R", "before": "new pairof images", "after": "image pair", "start_char_pos": 286, "end_char_pos": 303}, {"type": "R", "before": "of the specific anatomy and modalities they were trained on . In contrast, our", "after": "with image contrast and geometric content that are similar to those available during training. We focus on removing this image-data dependency of learning methods. Our", "start_char_pos": 384, "end_char_pos": 462}, {"type": "R", "before": "to synthesize", "after": "for diverse", "start_char_pos": 501, "end_char_pos": 514}, {"type": "R", "before": "gray-scale images that expose a network", "after": "images that exposes networks", "start_char_pos": 530, "end_char_pos": 569}, {"type": "R", "before": "anatomy and contrast during training. We demonstrate", "after": "variability during training, forcing them to learn features invariant to image type (contrast). This strategy results in powerful networks trained to generalize to a broad array of real input images. 
We present extensive experiments, with a focus on 3D neuroimaging, showing", "start_char_pos": 589, "end_char_pos": 641}, {"type": "R", "before": "modalities,", "after": "image contrasts", "start_char_pos": 702, "end_char_pos": 713}, {"type": "R", "before": "a new modality.", "after": "new modalities. We demonstrate registration accuracy that most often surpasses the state of the art both within and across modalities, using a single model.", "start_char_pos": 746, "end_char_pos": 761}, {"type": "A", "before": null, "after": "from which we synthesize images", "start_char_pos": 800, "end_char_pos": 800}, {"type": "R", "before": "synthesized shapes , or supervoxels,", "after": "generated geometric shapes also", "start_char_pos": 853, "end_char_pos": 889}, {"type": "R", "before": "and makes the network agnostic to anatomy and contrast, all while eradicating the need for real data . We present extensive experiments demonstrating that this strategy enables registration of modalities not seen during training and surpasses the state of art in cross-contrast registration", "after": ", albeit slightly less accurate, while alleviating the dependency on real data of any kind", "start_char_pos": 938, "end_char_pos": 1228}, {"type": "R", "before": "integrated with the VoxelMorph library", "after": "available", "start_char_pos": 1243, "end_char_pos": 1281}], "sents_char_pos": [0, 150, 445, 626, 761, 1040, 1230]} {"doc_id": "2004.11114", "revision_depth": "2", "before_revision": "While deep neural networks (DNNs) are being increasingly used to make predictions from high-dimensional, complex data , they are widely seen as uninterpretable \"black boxes\", since it can be difficult to discover what input information is used to make predictions. This ability is particularly important for applications in cognitive neuroscience and neuroinformatics . 
A saliency map is a common approach for producing interpretable visualizations of the relative importance of input features for a prediction. However, many methods for creating these maps fail due to focusing too much on the input or being extremely sensitive to small input noise . It is also challenging to quantitatively evaluate how well saliency maps correspond to the truly relevant input information . In this paper, we develop two quantitative evaluation procedures for saliency methods, using the fact that the Human Connectome Project (HCP) dataset contains functional magnetic resonance imaging (fMRI ) datafrom multiple tasks per subject to create ground truth saliency maps . We then introduce an adversarial training method that makes DNNs robust to small input noise, and demonstrate that it measurably improves interpretability .", "after_revision": "Deep neural networks (DNNs) are being increasingly used to make predictions from functional magnetic resonance imaging (fMRI) data. However , they are widely seen as uninterpretable \"black boxes\", as it can be difficult to discover what input information is used by the DNN in the process, something important in both cognitive neuroscience and clinical applications . A saliency map is a common approach for producing interpretable visualizations of the relative importance of input features for a prediction. However, methods for creating maps often fail due to DNNs being sensitive to input noise, or by focusing too much on the input and too little on the model . It is also challenging to evaluate how well saliency maps correspond to the truly relevant input information , as ground truth is not always available . In this paper, we review a variety of methods for producing gradient-based saliency maps, and present a new adversarial training method we developed to make DNNs robust to input noise, with the goal of improving interpretability. 
We introduce two quantitative evaluation procedures for saliency map methods in fMRI, applicable whenever a DNN or linear model is being trained to decode some information from imaging data. We evaluate the procedures using a synthetic dataset where the complex activation structure is known, and on saliency maps produced for DNN and linear models for task decoding in the Human Connectome Project (HCP) dataset . Our key finding is that saliency maps produced with different methods vary widely in interpretability, in both in synthetic and HCP fMRI data. Strikingly, even when DNN and linear models decode at comparable levels of performance, DNN saliency maps score higher on interpretability than linear model saliency maps (derived via weights or gradient). Finally, saliency maps produced with our adversarial training method outperform those from other methods .", "edit_actions": [{"type": "R", "before": "While deep", "after": "Deep", "start_char_pos": 0, "end_char_pos": 10}, {"type": "R", "before": "high-dimensional, complex data", "after": "functional magnetic resonance imaging (fMRI) data. However", "start_char_pos": 87, "end_char_pos": 117}, {"type": "R", "before": "since", "after": "as", "start_char_pos": 175, "end_char_pos": 180}, {"type": "R", "before": "to make predictions. 
This ability is particularly important for applications in", "after": "by the DNN in the process, something important in both", "start_char_pos": 244, "end_char_pos": 323}, {"type": "R", "before": "neuroinformatics", "after": "clinical applications", "start_char_pos": 351, "end_char_pos": 367}, {"type": "D", "before": "many", "after": null, "start_char_pos": 521, "end_char_pos": 525}, {"type": "R", "before": "these maps", "after": "maps often", "start_char_pos": 547, "end_char_pos": 557}, {"type": "A", "before": null, "after": "DNNs being sensitive to input noise, or by", "start_char_pos": 570, "end_char_pos": 570}, {"type": "R", "before": "or being extremely sensitive to small input noise", "after": "and too little on the model", "start_char_pos": 602, "end_char_pos": 651}, {"type": "D", "before": "quantitatively", "after": null, "start_char_pos": 680, "end_char_pos": 694}, {"type": "A", "before": null, "after": ", as ground truth is not always available", "start_char_pos": 778, "end_char_pos": 778}, {"type": "R", "before": "develop", "after": "review a variety of methods for producing gradient-based saliency maps, and present a new adversarial training method we developed to make DNNs robust to input noise, with the goal of improving interpretability. We introduce", "start_char_pos": 799, "end_char_pos": 806}, {"type": "R", "before": "methods, using", "after": "map methods in fMRI, applicable whenever a DNN or linear model is being trained to decode some information from imaging data. We evaluate the procedures using a synthetic dataset where the complex activation structure is known, and on saliency maps produced for DNN and linear models for task decoding in", "start_char_pos": 859, "end_char_pos": 873}, {"type": "D", "before": "fact that the", "after": null, "start_char_pos": 878, "end_char_pos": 891}, {"type": "R", "before": "contains functional magnetic resonance imaging (fMRI ) datafrom multiple tasks per subject to create ground truth saliency maps . 
We then introduce an", "after": ". Our key finding is that saliency maps produced with different methods vary widely in interpretability, in both in synthetic and HCP fMRI data. Strikingly, even when DNN and linear models decode at comparable levels of performance, DNN saliency maps score higher on interpretability than linear model saliency maps (derived via weights or gradient). Finally, saliency maps produced with our", "start_char_pos": 931, "end_char_pos": 1081}, {"type": "R", "before": "that makes DNNs robust to small input noise, and demonstrate that it measurably improves interpretability", "after": "outperform those from other methods", "start_char_pos": 1110, "end_char_pos": 1215}], "sents_char_pos": [0, 264, 369, 511, 653, 780, 1060]} {"doc_id": "2004.13102", "revision_depth": "1", "before_revision": "In many high-stakes domains such as criminal justice, finance, and healthcare, AI systems may recommend actions to a human expert responsible for final decisions, a context known as AI-advised decision making. When AI practitioners deploy the most accurate system in these domains, they implicitly assume that the system will function alone in the world. We argue that the most accurate AI team-mate is not necessarily the em best teammate; for example, predictable performance is worth a slight sacrifice in AI accuracy. So, we propose training AI systems in a human-centered manner and directly optimizing for team performance. We study this proposal for a specific type of human-AI team , where the human overseer chooses to accept the AI recommendation or solve the task themselves. To optimize the team performance we maximize the team's expected utility, expressed in terms of quality of the final decision, cost of verifying, and individual accuracies . 
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the improvements in utility while being small and varying across datasetsand parameters (such as cost of mistake), are real and consistent with our definition of team utility . We discuss the shortcoming of current optimization approaches beyond well-studied loss functions such as log-loss, and encourage future work on human-centered optimization problems motivated by human-AI collaborations .", "after_revision": "AI practitioners typically strive to develop the most accurate systems, making an implicit assumption that the AI system will function autonomously. However, in practice, AI systems often are used to provide advice to people in domains ranging from criminal justice and finance to healthcare. In such AI-advised decision making, humans and machines form a team, where the human is responsible for making final decisions. But is the most accurate AI the best teammate? We argue \"No\" -- predictable performance may be worth a slight sacrifice in AI accuracy. Instead, we argue that AI systems should be trained in a human-centered manner , directly optimized for team performance. We study this proposal for a specific type of human-AI teaming , where the human overseer chooses to either accept the AI recommendation or solve the task themselves. To optimize the team performance for this setting we maximize the team's expected utility, expressed in terms of the quality of the final decision, cost of verifying, and individual accuracies of people and machines . Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accuracy AI may not lead to highest team performance and show the benefit of modeling teamwork during training through improvements in expected team utility across datasets, considering parameters such as human skill and the cost of mistakes . 
We discuss the shortcoming of current optimization approaches beyond well-studied loss functions such as log-loss, and encourage future work on AI optimization problems motivated by human-AI collaboration .", "edit_actions": [{"type": "R", "before": "In many high-stakes domains such as criminal justice, finance, and healthcare, AI systems may recommend actions to a human expert responsible for final decisions, a context known as AI-advised decision making. When AI practitioners deploy", "after": "AI practitioners typically strive to develop", "start_char_pos": 0, "end_char_pos": 238}, {"type": "R", "before": "system in these domains, they implicitly assume that the", "after": "systems, making an implicit assumption that the AI", "start_char_pos": 257, "end_char_pos": 313}, {"type": "R", "before": "alone in", "after": "autonomously. However, in practice, AI systems often are used to provide advice to people in domains ranging from criminal justice and finance to healthcare. In such AI-advised decision making, humans and machines form a team, where the human is responsible for making final decisions. But is", "start_char_pos": 335, "end_char_pos": 343}, {"type": "D", "before": "world. We argue that the", "after": null, "start_char_pos": 348, "end_char_pos": 372}, {"type": "R", "before": "team-mate is not necessarily the em best teammate; for example, predictable performance is", "after": "the best teammate? 
We argue \"No\" -- predictable performance may be", "start_char_pos": 390, "end_char_pos": 480}, {"type": "R", "before": "So, we propose training AI systems", "after": "Instead, we argue that AI systems should be trained", "start_char_pos": 522, "end_char_pos": 556}, {"type": "R", "before": "and directly optimizing", "after": ", directly optimized", "start_char_pos": 584, "end_char_pos": 607}, {"type": "R", "before": "team", "after": "teaming", "start_char_pos": 685, "end_char_pos": 689}, {"type": "A", "before": null, "after": "either", "start_char_pos": 728, "end_char_pos": 728}, {"type": "A", "before": null, "after": "for this setting", "start_char_pos": 821, "end_char_pos": 821}, {"type": "A", "before": null, "after": "the", "start_char_pos": 885, "end_char_pos": 885}, {"type": "A", "before": null, "after": "of people and machines", "start_char_pos": 962, "end_char_pos": 962}, {"type": "R", "before": "improvements in utility while being small and varying across datasetsand parameters (such as cost of mistake), are real and consistent with our definition of team utility", "after": "most accuracy AI may not lead to highest team performance and show the benefit of modeling teamwork during training through improvements in expected team utility across datasets, considering parameters such as human skill and the cost of mistakes", "start_char_pos": 1065, "end_char_pos": 1235}, {"type": "R", "before": "human-centered", "after": "AI", "start_char_pos": 1382, "end_char_pos": 1396}, {"type": "R", "before": "collaborations", "after": "collaboration", "start_char_pos": 1441, "end_char_pos": 1455}], "sents_char_pos": [0, 209, 354, 440, 521, 629, 787, 964, 1237]} {"doc_id": "2004.13536", "revision_depth": "1", "before_revision": "For the sake of extracting hidden mutual and coupled information from possibly uncoupled time-series, we explored the profound measures of network science on time-series . 
Alongside common methods in time-series analysis of coupling between financial and economic markets, mapping coupled time-series onto networks is an outstanding measure to provide insight into hidden aspects embedded in couplings intrinsically. In this manner, we discretize the amplitude of coupled time-series and investigate relative simultaneous locations of the corresponding amplitudes(nodes). The transmissions between simultaneous amplitudes are clarified by edges in the network. In this sense, by segmenting magnitudes, the scaling features, volatilities' size and also the direction of the coupled amplitudes can be described. The frequency of occurrences of the coupled amplitudes is illustrated by the weighted edges, that is to say, some coupled amplitudes in the time-series can be identified as communities in the network . The results show that despite apparently uncoupled joint probabilities, the couplings possess some aspects which diverge from random Gaussian noise. Thereby, with the aid of the network's topological and statistical measurements , we distinguished basic structures of coupling of cross-market networks. Meanwhile, it was discovered that even two possibly known uncoupled markets may possess coupled patterns with each other. Thereby, those markets should be examined as coupled and weakly coupled markets! \\textit{ ", "after_revision": "In order to extract hidden joint information from two possibly uncorrelated time-series, we explored the measures of network science . Alongside common methods in time-series analysis of the economic markets, mapping the joint structure of two time-series onto a network provides insight into hidden aspects embedded in the couplings. We discretize the amplitude of two time-series and investigate relative simultaneous locations of those amplitudes. Each segment of a discretized amplitude is considered as a node. The simultaneity of the amplitudes of the two time-series is considered as the edges in the network. 
The frequency of occurrences forms the weighted edges. In order to extract information, we need to measure that to what extent the coupling deviates from the coupling of two uncoupled series. Also, we need to measure that to what extent the couplings inherit their characteristics from a Gaussian distribution or a non-Gaussian distribution. We mapped the network from two surrogate time-series . The results show that the couplings of markets possess some features which diverge from the same features of the network mapped from white noise, and from the network mapped from two surrogate time-series. These deviations prove that there exist joint information and cross-correlation therein. By applying the network's topological and statistical measures and the deformation ratio in the joint probability distribution , we distinguished basic structures of cross-correlation and coupling of cross-markets. It was discovered that even two possibly known uncorrelated markets may possess some joint patterns with each other. 
Thereby, those markets should be examined as coupled and \\textit{weakly coupled markets.", "edit_actions": [{"type": "R", "before": "For the sake of extracting hidden mutual and coupled information from possibly uncoupled", "after": "In order to extract hidden joint information from two possibly uncorrelated", "start_char_pos": 0, "end_char_pos": 88}, {"type": "D", "before": "profound", "after": null, "start_char_pos": 118, "end_char_pos": 126}, {"type": "D", "before": "on time-series", "after": null, "start_char_pos": 155, "end_char_pos": 169}, {"type": "R", "before": "coupling between financial and", "after": "the", "start_char_pos": 224, "end_char_pos": 254}, {"type": "R", "before": "coupled", "after": "the joint structure of two", "start_char_pos": 281, "end_char_pos": 288}, {"type": "R", "before": "networks is an outstanding measure to provide", "after": "a network provides", "start_char_pos": 306, "end_char_pos": 351}, {"type": "R", "before": "couplings intrinsically. In this manner, we", "after": "the couplings. We", "start_char_pos": 392, "end_char_pos": 435}, {"type": "R", "before": "coupled", "after": "two", "start_char_pos": 464, "end_char_pos": 471}, {"type": "R", "before": "the corresponding amplitudes(nodes). The transmissions between simultaneous amplitudes are clarified by", "after": "those amplitudes. Each segment of a discretized amplitude is considered as a node. The simultaneity of the amplitudes of the two time-series is considered as the", "start_char_pos": 535, "end_char_pos": 638}, {"type": "D", "before": "In this sense, by segmenting magnitudes, the scaling features, volatilities' size and also the direction of the coupled amplitudes can be described.", "after": null, "start_char_pos": 661, "end_char_pos": 809}, {"type": "R", "before": "of the coupled amplitudes is illustrated by the weighted edges, that is to say, some coupled amplitudes in the", "after": "forms the weighted edges. 
In order to extract information, we need to measure that to what extent the coupling deviates from the coupling of two uncoupled series. Also, we need to measure that to what extent the couplings inherit their characteristics from a Gaussian distribution or a non-Gaussian distribution. We mapped the network from two surrogate", "start_char_pos": 839, "end_char_pos": 949}, {"type": "D", "before": "can be identified as communities in the network", "after": null, "start_char_pos": 962, "end_char_pos": 1009}, {"type": "R", "before": "despite apparently uncoupled joint probabilities, the couplings possess some aspects", "after": "the couplings of markets possess some features", "start_char_pos": 1034, "end_char_pos": 1118}, {"type": "R", "before": "random Gaussian noise. Thereby, with the aid of", "after": "the same features of the network mapped from white noise, and from the network mapped from two surrogate time-series. These deviations prove that there exist joint information and cross-correlation therein. By applying", "start_char_pos": 1138, "end_char_pos": 1185}, {"type": "R", "before": "measurements", "after": "measures and the deformation ratio in the joint probability distribution", "start_char_pos": 1228, "end_char_pos": 1240}, {"type": "R", "before": "coupling of cross-market networks. Meanwhile, it", "after": "cross-correlation and coupling of cross-markets. 
It", "start_char_pos": 1280, "end_char_pos": 1328}, {"type": "R", "before": "uncoupled", "after": "uncorrelated", "start_char_pos": 1373, "end_char_pos": 1382}, {"type": "R", "before": "coupled", "after": "some joint", "start_char_pos": 1403, "end_char_pos": 1410}, {"type": "D", "before": "weakly coupled markets!", "after": null, "start_char_pos": 1494, "end_char_pos": 1517}, {"type": "A", "before": null, "after": "weakly", "start_char_pos": 1526, "end_char_pos": 1526}, {"type": "A", "before": null, "after": "coupled markets.", "start_char_pos": 1527, "end_char_pos": 1527}], "sents_char_pos": [0, 171, 416, 571, 660, 809, 1011, 1160, 1314, 1436, 1517]} {"doc_id": "2004.13614", "revision_depth": "1", "before_revision": "Assessing the impacts of COVID-19 are of paramount importance for global sustainability. Using a coordinated set of high-resolution sectoral assessment tools, we report a decrease of 4.2\\% in global CO_2 emission in first quarter of 2020. Our emission estimatesreflect near real time inventories of emissions } from power generation , transportation, industry, international aviation and maritime sectors in 34 countries that account for >70\\% of world energy-related CO2 emissions in recent years. Regional variations in CO_2 emissions are significant, with a decrease in China (-9.3\\% ) , US (-3.0 \\%), Europe ( EU-27 & UK)(-3.3\\%) and India (-2.4\\%) , respectively. The decline of short-lived gaseous pollutants, such as NO_2 concentration observed by Satellites (-25.73\\%for China, -4.76\\%for US) and ground observations (-23\\%for China)is consistent with the estimates based on energy activity (-23.94\\% for China, -3.52\\% for US), but the decline is not seen in satellite assessments of aerosol optical depth (AOD) or dry column CO_2 (XCO_2). With fast recovery and partial re-opening of national economies, our findings suggest that total annual emissions may drop far less than previously estimated (e.g. 
, by 25\\%for China and more than 5\\%for the whole world) . However, the longer-term effects on CO_2 emissions are unknown and should be carefully monitored using multiple measures .", "after_revision": "The unprecedented cessation of human activities during the COVID-19 pandemic has affected global energy use and CO2 emissions from fossil fuel use and cement production. Here we show that the decrease in global fossil CO2 emissions during the first quarter of 2020 was of 5.8\\% (542 Mt CO2 with a 20\\% 1- \\sigma} uncertainty). Unlike other emerging estimates, ours show the temporal dynamics of emissions based on actual emissions data from power generation (for 29 countries) and industry (for 73 countries), on near real time activity data for road transportation (for 132 countries), aviation and maritime transportation, and on heating degree days for commercial and residential sectors emissions (for 206 countries). These dynamic estimates cover all of the human induced CO2 emissions from fossil fuel combustion and cement production. The largest share of COVID-related decreases in emissions are due to decreases in industry (157.9 Mt CO2 , -7.1\\% compared to 2019), followed by road transportation (145.7 Mt CO2, -8.3 \\%), power generation (131.6 Mt CO2, -3.8\\%), residential (47.8 Mt CO2, -3.6\\%), fishing and maritime transport (35.5Mt CO2, -13.3\\%) and aviation (33.4 Mt CO2, -8.0\\%). Regionally, decreases in emissions from China were the largest and earliest (-10.3\\%), followed by Europe ( EU-27 & UK) (-4.3\\%) and the U.S. (-4.2\\%). Relative decreases of regional CO2 emissions are consistent with regional nitrogen oxides concentrations observed by satellites and ground-based networks. Despite the unprecedented decreases in CO2 emissions and comparable decreases in economic activities, we monitored decreases in the carbon intensity (Emission per unit of GDP) in China (3.5\\%) , the U.S. 
(4.5\\%) and Europe (5.4\\%) over the first quarter, suggesting that carbon-intensive activities have been disproportionally impacted .", "edit_actions": [{"type": "R", "before": "Assessing the impacts of", "after": "The unprecedented cessation of human activities during the", "start_char_pos": 0, "end_char_pos": 24}, {"type": "R", "before": "are of paramount importance for global sustainability. Using a coordinated set of high-resolution sectoral assessment tools, we report a decrease of 4.2\\% in global CO_2 emission in", "after": "pandemic has affected global energy use and CO2 emissions from fossil fuel use and cement production. Here we show that the decrease in global fossil CO2 emissions during the", "start_char_pos": 34, "end_char_pos": 215}, {"type": "R", "before": "2020. Our emission estimatesreflect near real time inventories of emissions", "after": "2020 was of 5.8\\% (542 Mt CO2 with a 20\\% 1-", "start_char_pos": 233, "end_char_pos": 308}, {"type": "A", "before": null, "after": "\\sigma", "start_char_pos": 309, "end_char_pos": 309}, {"type": "A", "before": null, "after": "uncertainty). Unlike other emerging estimates, ours show the temporal dynamics of emissions based on actual emissions data", "start_char_pos": 311, "end_char_pos": 311}, {"type": "R", "before": ", transportation, industry, international", "after": "(for 29 countries) and industry (for 73 countries), on near real time activity data for road transportation (for 132 countries),", "start_char_pos": 334, "end_char_pos": 375}, {"type": "R", "before": "sectors in 34 countries that account for >70\\% of world energy-related", "after": "transportation, and on heating degree days for commercial and residential sectors emissions (for 206 countries). These dynamic estimates cover all of the human induced", "start_char_pos": 398, "end_char_pos": 468}, {"type": "R", "before": "in recent years. 
Regional variations in CO_2 emissions are significant, with a decrease in China (-9.3\\% )", "after": "from fossil fuel combustion and cement production. The largest share of COVID-related decreases in emissions are due to decreases in industry (157.9 Mt CO2", "start_char_pos": 483, "end_char_pos": 589}, {"type": "R", "before": "US (-3.0", "after": "-7.1\\% compared to 2019), followed by road transportation (145.7 Mt CO2, -8.3", "start_char_pos": 592, "end_char_pos": 600}, {"type": "R", "before": "Europe (", "after": "power generation (131.6 Mt CO2, -3.8\\%), residential (47.8 Mt CO2, -3.6\\%), fishing and maritime transport (35.5Mt CO2, -13.3\\%) and aviation (33.4 Mt CO2, -8.0\\%). Regionally, decreases in emissions from China were the largest and earliest (-10.3\\%), followed by Europe (", "start_char_pos": 606, "end_char_pos": 614}, {"type": "R", "before": "UK)(-3.3\\%)", "after": "UK) (-4.3\\%)", "start_char_pos": 623, "end_char_pos": 634}, {"type": "R", "before": "India (-2.4\\%)", "after": "the U.S. (-4.2\\%). Relative decreases of regional CO2 emissions are consistent with regional nitrogen oxides concentrations observed by satellites and ground-based networks. Despite the unprecedented decreases in CO2 emissions and comparable decreases in economic activities, we monitored decreases in the carbon intensity (Emission per unit of GDP) in China (3.5\\%)", "start_char_pos": 639, "end_char_pos": 653}, {"type": "R", "before": "respectively. The decline of short-lived gaseous pollutants, such as NO_2 concentration observed by Satellites (-25.73\\%for China, -4.76\\%for US) and ground observations (-23\\%for China)is consistent with the estimates based on energy activity (-23.94\\% for China, -3.52\\% for US), but the decline is not seen in satellite assessments of aerosol optical depth (AOD) or dry column CO_2 (XCO_2). 
With fast recovery and partial re-opening of national economies, our findings suggest that total annual emissions may drop far less than previously estimated (e.g. , by 25\\%for China and more than 5\\%for the whole world) . However, the longer-term effects on CO_2 emissions are unknown and should be carefully monitored using multiple measures", "after": "the U.S. (4.5\\%) and Europe (5.4\\%) over the first quarter, suggesting that carbon-intensive activities have been disproportionally impacted", "start_char_pos": 656, "end_char_pos": 1393}], "sents_char_pos": [0, 88, 238, 499, 669, 1049, 1272]} {"doc_id": "2004.14821", "revision_depth": "1", "before_revision": "Neural machine translation (NMT) models do not work well in domains different from the training data. The standard approach to this problem is to build a small parallel data in the target domain and perform domain adaptation from a source domain where massive parallel data is available. However, domain adaptation between distant domains (e.g., subtitles and research papers) does not perform effectively because of mismatches in vocabulary; it will encounter many domain-specific unknown words (e.g., `angstrom' ) and words whose meanings shift across domains(e.g., `conductor' ). In this study, aiming to solve these vocabulary mismatches in distant domain adaptation , we propose vocabulary adaptation, a simple method for effective fine-tuning that adapts embedding layers in a given pre-trained NMT model to the target domain. Prior to fine-tuning, our method replaces word embeddings in embedding layers of the NMT model , by projecting general word embeddings induced from monolingual data in the target domain onto the source-domain embedding space. 
Experimental results on distant domain adaptation for English-to-Japanese translation and German-to-English translation indicate that our vocabulary adaptation improves the performance of fine-tuning by 3.6 BLEU points .", "after_revision": "Neural network methods exhibit strong performance only in a few resource-rich domains. Practitioners, therefore, employ domain adaptation from resource-rich domains that are, in most cases, distant from the target domain. Domain adaptation between distant domains (e.g., movie subtitles and research papers) , however, cannot be performed effectively due to mismatches in vocabulary; it will encounter many domain-specific words (e.g., \"angstrom\" ) and words whose meanings shift across domains(e.g., \"conductor\" ). In this study, aiming to solve these vocabulary mismatches in domain adaptation for neural machine translation (NMT) , we propose vocabulary adaptation, a simple method for effective fine-tuning that adapts embedding layers in a given pre-trained NMT model to the target domain. Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space. Experimental results indicate that our method improves the performance of conventional fine-tuning by 3.86 and 3.28 BLEU points in En-Ja and De-En translation, respectively .", "edit_actions": [{"type": "R", "before": "machine translation (NMT) models do not work well in domains different from the training data. The standard approach to this problem is to build a small parallel data in the target domain and perform domain adaptation from a source domain where massive parallel data is available. However, domain", "after": "network methods exhibit strong performance only in a few resource-rich domains. 
Practitioners, therefore, employ domain", "start_char_pos": 7, "end_char_pos": 303}, {"type": "A", "before": null, "after": "from resource-rich domains that are, in most cases, distant from the target domain. Domain adaptation", "start_char_pos": 315, "end_char_pos": 315}, {"type": "A", "before": null, "after": "movie", "start_char_pos": 347, "end_char_pos": 347}, {"type": "R", "before": "does not perform effectively because of", "after": ", however, cannot be performed effectively due to", "start_char_pos": 379, "end_char_pos": 418}, {"type": "D", "before": "unknown", "after": null, "start_char_pos": 484, "end_char_pos": 491}, {"type": "R", "before": "`angstrom'", "after": "\"angstrom\"", "start_char_pos": 505, "end_char_pos": 515}, {"type": "R", "before": "`conductor'", "after": "\"conductor\"", "start_char_pos": 570, "end_char_pos": 581}, {"type": "R", "before": "distant domain adaptation", "after": "domain adaptation for neural machine translation (NMT)", "start_char_pos": 647, "end_char_pos": 672}, {"type": "R", "before": "word embeddings in", "after": "the", "start_char_pos": 877, "end_char_pos": 895}, {"type": "D", "before": ",", "after": null, "start_char_pos": 930, "end_char_pos": 931}, {"type": "R", "before": "the", "after": "a", "start_char_pos": 1003, "end_char_pos": 1006}, {"type": "R", "before": "the", "after": "a", "start_char_pos": 1026, "end_char_pos": 1029}, {"type": "D", "before": "on distant domain adaptation for English-to-Japanese translation and German-to-English translation", "after": null, "start_char_pos": 1082, "end_char_pos": 1180}, {"type": "R", "before": "vocabulary adaptation", "after": "method", "start_char_pos": 1199, "end_char_pos": 1220}, {"type": "A", "before": null, "after": "conventional", "start_char_pos": 1249, "end_char_pos": 1249}, {"type": "R", "before": "3.6 BLEU points", "after": "3.86 and 3.28 BLEU points in En-Ja and De-En translation, respectively", "start_char_pos": 1265, "end_char_pos": 1280}], "sents_char_pos": [0, 
101, 287, 444, 584, 834, 1060]} {"doc_id": "2004.14870", "revision_depth": "1", "before_revision": "Morphological inflection is a process of word formation where base words are modified to express different grammatical categories such as tense, case, voice, person, or number. World Englishes , such as Colloquial Singapore English (CSE) and African American Vernacular English (AAVE), differ from Standard English dialects in inflection use . Although comprehension by human readers is usually unimpaired by non-standard inflection use, NLP systems are not so robust. We introduce a new Base-Inflection Encoding of English text that is achieved by combining linguistic and statistical techniques . Fine-tuning pre-trained NLP models for downstream tasks under this novel encoding achieves robustness to non-standard inflection use while maintaining performance on Standard English examples . Models using this encoding also generalize better to non-standard dialects without explicit training . We suggest metrics to evaluate tokenizers and extensive model-independent analyses demonstrate the efficacy of the encoding when used together with data-driven subword tokenizers .", "after_revision": "Inflectional variation is a common feature of World Englishes such as Colloquial Singapore English and African American Vernacular English . Although comprehension by human readers is usually unimpaired by non-standard inflections, current NLP systems are not yet robust. We propose Base-Inflection Encoding (BITE), a method to tokenize English text by reducing inflected words to their base forms before reinjecting the grammatical information as special symbols . Fine-tuning pretrained NLP models for downstream tasks using our encoding defends against inflectional adversaries while maintaining performance on clean data . Models using BITE generalize better to dialects with non-standard inflections without explicit training and translation models converge faster when trained with BITE. 
Finally, we show that our encoding improves the vocabulary efficiency of popular data-driven subword tokenizers . Since there has been no prior work on quantitatively evaluating vocabulary efficiency, we propose metrics to do so .", "edit_actions": [{"type": "R", "before": "Morphological inflection is a process of word formation where base words are modified to express different grammatical categories such as tense, case, voice, person, or number. World Englishes ,", "after": "Inflectional variation is a common feature of World Englishes", "start_char_pos": 0, "end_char_pos": 194}, {"type": "D", "before": "(CSE)", "after": null, "start_char_pos": 232, "end_char_pos": 237}, {"type": "D", "before": "(AAVE), differ from Standard English dialects in inflection use", "after": null, "start_char_pos": 278, "end_char_pos": 341}, {"type": "R", "before": "inflection use,", "after": "inflections, current", "start_char_pos": 422, "end_char_pos": 437}, {"type": "R", "before": "so", "after": "yet", "start_char_pos": 458, "end_char_pos": 460}, {"type": "R", "before": "introduce a new", "after": "propose", "start_char_pos": 472, "end_char_pos": 487}, {"type": "R", "before": "of English text that is achieved by combining linguistic and statistical techniques", "after": "(BITE), a method to tokenize English text by reducing inflected words to their base forms before reinjecting the grammatical information as special symbols", "start_char_pos": 513, "end_char_pos": 596}, {"type": "R", "before": "pre-trained", "after": "pretrained", "start_char_pos": 611, "end_char_pos": 622}, {"type": "R", "before": "under this novel encoding achieves robustness to non-standard inflection use", "after": "using our encoding defends against inflectional adversaries", "start_char_pos": 655, "end_char_pos": 731}, {"type": "R", "before": "Standard English examples", "after": "clean data", "start_char_pos": 765, "end_char_pos": 790}, {"type": "R", "before": "this encoding also", "after": "BITE", 
"start_char_pos": 806, "end_char_pos": 824}, {"type": "R", "before": "non-standard dialects", "after": "dialects with non-standard inflections", "start_char_pos": 846, "end_char_pos": 867}, {"type": "R", "before": ". We suggest metrics to evaluate tokenizers and extensive model-independent analyses demonstrate the efficacy of the encoding when used together with", "after": "and translation models converge faster when trained with BITE. Finally, we show that our encoding improves the vocabulary efficiency of popular", "start_char_pos": 894, "end_char_pos": 1043}, {"type": "A", "before": null, "after": ". Since there has been no prior work on quantitatively evaluating vocabulary efficiency, we propose metrics to do so", "start_char_pos": 1075, "end_char_pos": 1075}], "sents_char_pos": [0, 176, 343, 468, 598, 792, 895]} {"doc_id": "2004.15003", "revision_depth": "2", "before_revision": "One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are both intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vectors. To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity. Alignment-based approaches do not distinguish the norm and direction , whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance. 
Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter) ; this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method . On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines. ", "after_revision": "A key principle in assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vectors. To address this issue , we focus on and demonstrate the fact that the norm of word vectors is a good proxy for word importance, and their angle is a good proxy for word similarity. Alignment-based approaches do not distinguish them , whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly, we propose a method that first decouples word vectors into their norm and direction , and then computes alignment-based similarity using earth mover's distance ( i.e., optimal transport cost), which we refer to as word rotator's distance. Besides, we find how to grow the norm and direction of word vectors (vector converter) , which is a new systematic approach derived from sentence-vector estimation methods . On several textual similarity datasets, the combination of these simple proposed methods outperformed not only alignment-based approaches but also strong baselines. 
The source code is available at URL", "edit_actions": [{"type": "R", "before": "One key principle for", "after": "A key principle in", "start_char_pos": 0, "end_char_pos": 21}, {"type": "D", "before": "both", "after": null, "start_char_pos": 184, "end_char_pos": 188}, {"type": "R", "before": "remedy this", "after": "address this issue", "start_char_pos": 334, "end_char_pos": 345}, {"type": "A", "before": null, "after": "and demonstrate", "start_char_pos": 360, "end_char_pos": 360}, {"type": "R", "before": "the angle of them", "after": "their angle", "start_char_pos": 441, "end_char_pos": 458}, {"type": "R", "before": "the norm and direction", "after": "them", "start_char_pos": 542, "end_char_pos": 564}, {"type": "R", "before": "to decouple", "after": "a method that first decouples", "start_char_pos": 677, "end_char_pos": 688}, {"type": "R", "before": "then computing the", "after": ", and then computes", "start_char_pos": 732, "end_char_pos": 750}, {"type": "A", "before": null, "after": "i.e.,", "start_char_pos": 809, "end_char_pos": 809}, {"type": "R", "before": "Furthermore, we demonstrate", "after": "Besides, we find", "start_char_pos": 881, "end_char_pos": 908}, {"type": "R", "before": "; this", "after": ", which", "start_char_pos": 979, "end_char_pos": 985}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1028, "end_char_pos": 1031}, {"type": "D", "before": ", which can significantly improve the performance of the proposed method", "after": null, "start_char_pos": 1067, "end_char_pos": 1139}, {"type": "R", "before": "STS benchmarks, our", "after": "textual similarity datasets, the combination of these", "start_char_pos": 1153, "end_char_pos": 1172}, {"type": "A", "before": null, "after": "The source code is available at URL", "start_char_pos": 1273, "end_char_pos": 1273}], "sents_char_pos": [0, 147, 217, 330, 495, 652, 880, 980, 1141]} {"doc_id": "2004.15020", "revision_depth": "1", "before_revision": "Image captioning datasets have proven 
useful for multimodal representation learning, and a common evaluation paradigm based on multimodal retrieval has emerged . Unfortunately, datasets have only limited cross-modal associations: images are not paired with others , captions are only paired with others that describe the same image, there are no negative associations and there are missing positive cross-modal associations. This undermines retrieval evaluation and limits research into how inter-modality learning impacts intra-modality tasks. To address this gap , we create theCrisscrossed Captions (CxC) dataset, extending MS-COCO with new semantic similarity judgments for \\textbf{247,315 intra- and inter-modality pairs. We provide baseline model performance results for both retrieval and correlations with human rankings, emphasizing both intra- and inter-modality learning.", "after_revision": "By supporting multi-modal retrieval training and evaluation, image captioning datasets have spurred remarkable progress on representation learning . Unfortunately, datasets have limited cross-modal associations: images are not paired with other images , captions are only paired with other captions of the same image, there are no negative associations and there are missing positive cross-modal associations. This undermines research into how inter-modality learning impacts intra-modality tasks. We address this gap with Crisscrossed Captions (CxC) , an extension of the MS-COCO dataset with human semantic similarity judgments for \\textbf{267,095 intra- and inter-modality pairs. We report baseline results on CxC for strong existing unimodal and multimodal models. 
We also evaluate a multitask dual encoder trained on both image-caption and caption-caption pairs that crucially demonstrates CxC's value for measuring the influence of intra- and inter-modality learning.", "edit_actions": [{"type": "R", "before": "Image", "after": "By supporting multi-modal retrieval training and evaluation, image", "start_char_pos": 0, "end_char_pos": 5}, {"type": "R", "before": "proven useful for multimodal representation learning, and a common evaluation paradigm based on multimodal retrieval has emerged", "after": "spurred remarkable progress on representation learning", "start_char_pos": 31, "end_char_pos": 159}, {"type": "D", "before": "only", "after": null, "start_char_pos": 191, "end_char_pos": 195}, {"type": "R", "before": "others", "after": "other images", "start_char_pos": 257, "end_char_pos": 263}, {"type": "R", "before": "others that describe", "after": "other captions of", "start_char_pos": 296, "end_char_pos": 316}, {"type": "D", "before": "retrieval evaluation and limits", "after": null, "start_char_pos": 441, "end_char_pos": 472}, {"type": "R", "before": "To", "after": "We", "start_char_pos": 545, "end_char_pos": 547}, {"type": "D", "before": ", we create the", "after": null, "start_char_pos": 565, "end_char_pos": 580}, {"type": "R", "before": "Crisscrossed Captions", "after": "with Crisscrossed Captions", "start_char_pos": 580, "end_char_pos": 601}, {"type": "R", "before": "dataset, extending", "after": ", an extension of the", "start_char_pos": 608, "end_char_pos": 626}, {"type": "R", "before": "with new", "after": "dataset with human", "start_char_pos": 635, "end_char_pos": 643}, {"type": "R", "before": "247,315", "after": "267,095", "start_char_pos": 686, "end_char_pos": 693}, {"type": "R", "before": "provide baseline model performance results for both retrieval and correlations with human rankings, emphasizing both", "after": "report baseline results on CxC for strong existing unimodal and multimodal models. 
We also evaluate a multitask dual encoder trained on both image-caption and caption-caption pairs that crucially demonstrates CxC's value for measuring the influence of", "start_char_pos": 730, "end_char_pos": 846}], "sents_char_pos": [0, 161, 424, 544, 726]} {"doc_id": "2005.00081", "revision_depth": "1", "before_revision": "Given a user-specified minimum degree threshold \\gamma , a%DIFDELCMD < {%%% \\gamma -quasi-clique is a subgraph where each vertex connects to at least \\gamma fraction of the other vertices . Mining maximal\\lceil \\rceil quasi-cliques is notoriously expensive with the state-of-the-art algorithm scaling only to small graphs with thousands of vertices.This has hampered its popularity in real applications involving big graphs. We developed a task-based system called G-thinker for massively parallel graph mining, which is the first graph mining system that scales with the number of CPU cores. G-thinker provides a unique opportunity to scale the compute-intensive quasi-clique mining . This paperdesigns parallel algorithms for mining maximal quasi-cliques on G-thinker that scale to big graphs. Our algorithms follow the idea of divide and conquer which partitions the problem of mining a big graph into tasks that mine smaller subgraphs . However, we find that a direct application of G-thinker is insufficient due to the drastically different running time of different tasks that violates the original design assumption of G-thinker , requiring a system URLe. We also observe that the running time of a task is highly unpredictable solely from the features extracted from its subgraph, leading to difficulty in pinpoint expensive tasks to decompose for concurrent processing, and size-threshold based partitioning under-partitions some tasks but over-partitions others, leading to bad load balancing and enormous task partitioning overheads. 
We address this issue by proposing a novel time-delayed divide-and-conquer strategy that strikes a balance between the workloads spent on actual mining and the cost of balancing the workloads. Extensive experiments verify that our G-thinker algorithm scales perfectly with the number of CPU cores, achieving over 300x speedup when running on a graph with over 1M vertices in a small cluster.", "after_revision": "Given a user-specified minimum degree threshold %DIFDELCMD < {%%% \\gamma, a \\gamma -quasi-clique is a subgraph g=(V_g,E_g) where each vertex v\\in V_g connects to at least \\gamma fraction of the other vertices (i.e.,\\lceil \\gamma\\cdot(|V_g|-1)\\rceil vertices) in g. Quasi-clique is one of the most natural definitions for dense structures useful in finding communities in social networks and discovering significant biomolecule structures and pathways. However, mining maximal quasi-cliques is notoriously expensive. In this paper, we design parallel algorithms for mining maximal quasi-cliques on G-thinker , a recent distributed framework targeting divide-and-conquer graph mining algorithms that decomposes the mining into compute-intensive tasks to fully utilize CPU cores . However, we found that directly using G-thinker results in the straggler problem due to (i) the drastic load imbalance among different tasks and (ii) the difficulty of predicting the task running time and the time growth with task-subgraph size. We address these challenges by redesigning G-thinker 's execution engine to prioritize long-running tasks for mining, and by utilizing a novel timeout strategy to effectively decompose the mining workloads of long-running tasks to improve load balancing. While this system redesign applies to many other expensive dense subgraph mining problems, this paper verifies the idea by adapting the state-of-the-art quasi-clique algorithm, Quick, to our redesigned G-thinker. 
We improve Quick by integrating new pruning rules, and fixing some missed boundary cases that could lead to missed results. Extensive experiments verify that our new solution scales well with the number of CPU cores, achieving 201\\times runtime speedup when mining a graph with 3.77M vertices and 16.5M edges in a 16-node cluster.", "edit_actions": [{"type": "D", "before": "\\gamma", "after": null, "start_char_pos": 48, "end_char_pos": 54}, {"type": "D", "before": ", a", "after": null, "start_char_pos": 55, "end_char_pos": 58}, {"type": "R", "before": "\\gamma", "after": "\\gamma, a \\gamma", "start_char_pos": 76, "end_char_pos": 82}, {"type": "A", "before": null, "after": "g=(V_g,E_g)", "start_char_pos": 111, "end_char_pos": 111}, {"type": "A", "before": null, "after": "v\\in V_g", "start_char_pos": 130, "end_char_pos": 130}, {"type": "R", "before": "\\gamma", "after": "\\gamma", "start_char_pos": 152, "end_char_pos": 158}, {"type": "R", "before": ". Mining maximal", "after": "(i.e.,", "start_char_pos": 190, "end_char_pos": 206}, {"type": "A", "before": null, "after": "\\gamma\\cdot(|V_g|-1)", "start_char_pos": 213, "end_char_pos": 213}, {"type": "A", "before": null, "after": "vertices) in g. Quasi-clique is one of the most natural definitions for dense structures useful in finding communities in social networks and discovering significant biomolecule structures and pathways. However, mining maximal", "start_char_pos": 220, "end_char_pos": 220}, {"type": "R", "before": "is notoriously expensive with the state-of-the-art algorithm scaling only to small graphs with thousands of vertices.This has hampered its popularity in real applications involving big graphs. We developed a task-based system called G-thinker for massively parallel graph mining, which is the first graph mining system that scales with the number of CPU cores. G-thinker provides a unique opportunity to scale the compute-intensive quasi-clique mining . 
This paperdesigns", "after": "is notoriously expensive. In this paper, we design", "start_char_pos": 235, "end_char_pos": 706}, {"type": "R", "before": "that scale to big graphs. Our algorithms follow the idea of divide and conquer which partitions the problem of mining a big graph into tasks that mine smaller subgraphs", "after": ", a recent distributed framework targeting divide-and-conquer graph mining algorithms that decomposes the mining into compute-intensive tasks to fully utilize CPU cores", "start_char_pos": 773, "end_char_pos": 941}, {"type": "R", "before": "find that a direct application of", "after": "found that directly using", "start_char_pos": 956, "end_char_pos": 989}, {"type": "R", "before": "is insufficient due to the drastically different running time of different tasks that violates the original design assumption of", "after": "results in the straggler problem due to (i) the drastic load imbalance among different tasks and (ii) the difficulty of predicting the task running time and the time growth with task-subgraph size. We address these challenges by redesigning", "start_char_pos": 1000, "end_char_pos": 1128}, {"type": "R", "before": ", requiring a system URLe. We also observe that the running time of a task is highly unpredictable solely from the features extracted from its subgraph, leading to difficulty in pinpoint expensive tasks to decompose for concurrent processing, and size-threshold based partitioning under-partitions some tasks but over-partitions others, leading to bad load balancing and enormous task partitioning overheads. We address this issue by proposing a novel time-delayed divide-and-conquer strategy that strikes a balance between the workloads spent on actual mining and the cost of balancing the workloads.", "after": "'s execution engine to prioritize long-running tasks for mining, and by utilizing a novel timeout strategy to effectively decompose the mining workloads of long-running tasks to improve load balancing. 
While this system redesign applies to many other expensive dense subgraph mining problems, this paper verifies the idea by adapting the state-of-the-art quasi-clique algorithm, Quick, to our redesigned G-thinker. We improve Quick by integrating new pruning rules, and fixing some missed boundary cases that could lead to missed results.", "start_char_pos": 1139, "end_char_pos": 1740}, {"type": "R", "before": "G-thinker algorithm scales perfectly", "after": "new solution scales well", "start_char_pos": 1779, "end_char_pos": 1815}, {"type": "R", "before": "over 300x speedup when running on", "after": "201\\times runtime speedup when mining", "start_char_pos": 1856, "end_char_pos": 1889}, {"type": "R", "before": "over 1M vertices in a small", "after": "3.77M vertices and 16.5M edges in a 16-node", "start_char_pos": 1903, "end_char_pos": 1930}], "sents_char_pos": [0, 191, 352, 427, 595, 688, 798, 943, 1165, 1547, 1740]} {"doc_id": "2005.01282", "revision_depth": "1", "before_revision": "The goal of unconditional text generation is training a model with real sentences, to generate novel sentences which should be the same quality and diversity as the training data. However, when different metrics are used for comparing these methods , the contradictory conclusions are drawn. The difficulty is that both the sample diversity and the sample quality should be taken into account simultaneously , when a generative model is evaluated. To solve this issue , a novel metric of distributional discrepancy (DD) is designed to evaluate generators according to the discrepancy between the generated sentences and the real training sentences. But, a challenge is that it can't compute DD directly because the distribution of real sentences is unavailable. Thus, we propose a method to estimate DD by training a neural-network-based text classifier. 
For comparison, three existing metrics, Bilingual Evaluation Understudy (BLEU) verse self-BLEU, language model score verse reverse language model score, Fr'chet Embedding Distance (FED), together with the proposed DD, are used to evaluate two popular generative models of LSTM and GPT-2 on both syntactic and real data. Experimental results show DD is much better than the three existing metrics in ranking these generative models.", "after_revision": "The purpose of unconditional text generation is to train a model with real sentences, then generate novel sentences of the same quality and diversity as the training data. However, when different metrics are used for comparing the methods of unconditional text generation, contradictory conclusions are drawn. The difficulty is that both the diversity and quality of the sample should be considered simultaneously when the models are evaluated. To solve this problem , a novel metric of distributional discrepancy (DD) is designed to evaluate generators based on the discrepancy between the generated and real training sentences. However, it cannot compute the DD directly because the distribution of real sentences is unavailable. Thus, we propose a method for estimating the DD by training a neural-network-based text classifier. For comparison, three existing metrics, bi-lingual evaluation understudy (BLEU) versus self-BLEU, language model score versus reverse language model score, and Fr\\'{e with the proposed DD, are used to evaluate two popular generative models of long short-term memory and generative pretrained transformer 2 on both syntactic and real data. 
Experimental results show that DD is significantly better than the three existing metrics for ranking these generative models.", "edit_actions": [{"type": "R", "before": "goal", "after": "purpose", "start_char_pos": 4, "end_char_pos": 8}, {"type": "R", "before": "training", "after": "to train", "start_char_pos": 45, "end_char_pos": 53}, {"type": "R", "before": "to", "after": "then", "start_char_pos": 83, "end_char_pos": 85}, {"type": "R", "before": "which should be", "after": "of", "start_char_pos": 111, "end_char_pos": 126}, {"type": "R", "before": "these methods , the", "after": "the methods of unconditional text generation,", "start_char_pos": 235, "end_char_pos": 254}, {"type": "R", "before": "sample diversity and the sample quality should be taken into account simultaneously , when a generative model is", "after": "diversity and quality of the sample should be considered simultaneously when the models are", "start_char_pos": 324, "end_char_pos": 436}, {"type": "R", "before": "issue", "after": "problem", "start_char_pos": 462, "end_char_pos": 467}, {"type": "R", "before": "according to", "after": "based on", "start_char_pos": 555, "end_char_pos": 567}, {"type": "R", "before": "sentences and the", "after": "and", "start_char_pos": 606, "end_char_pos": 623}, {"type": "R", "before": "But, a challenge is that it can't compute", "after": "However, it cannot compute the", "start_char_pos": 649, "end_char_pos": 690}, {"type": "R", "before": "to estimate", "after": "for estimating the", "start_char_pos": 788, "end_char_pos": 799}, {"type": "R", "before": "Bilingual Evaluation Understudy", "after": "bi-lingual evaluation understudy", "start_char_pos": 895, "end_char_pos": 926}, {"type": "R", "before": "verse", "after": "versus", "start_char_pos": 934, "end_char_pos": 939}, {"type": "R", "before": "verse", "after": "versus", "start_char_pos": 972, "end_char_pos": 977}, {"type": "R", "before": "Fr'chet Embedding Distance (FED), together", "after": "and Fr\\'{e", 
"start_char_pos": 1008, "end_char_pos": 1050}, {"type": "R", "before": "LSTM and GPT-2", "after": "long short-term memory and generative pretrained transformer 2", "start_char_pos": 1127, "end_char_pos": 1141}, {"type": "R", "before": "DD is much", "after": "that DD is significantly", "start_char_pos": 1201, "end_char_pos": 1211}, {"type": "R", "before": "in", "after": "for", "start_char_pos": 1251, "end_char_pos": 1253}], "sents_char_pos": [0, 179, 291, 447, 648, 761, 854, 1174]} {"doc_id": "2005.03459", "revision_depth": "1", "before_revision": "Real-world application scenarios like modern Internet services consist of diversity of AI and non-AI modules with very long and complex execution paths . Using component or micro AI benchmarks alone can lead to error-prone conclusions. This paper proposes a scenario-distilling AI benchmarking methodology . Instead of using real-world applications, we propose the permutations of essential AI and non-AI tasks as a scenario-distilling benchmark . We consider scenario-distilling benchmarks, component and micro benchmarks as three indispensable parts of a benchmark suite. Together with seventeen industry partners, we identify nine important real-world application scenarios . We design and implement a highly extensible, configurable, and flexible benchmark framework . On the basis of the framework, we propose the guideline for building scenario-distilling benchmarks, and present two Internet service AI ones. The preliminary evaluation shows the advantage of scenario-distilling AI benchmarking against using component or micro AI benchmarks alone. 
The specifications, source code, testbed, and results are publicly available from the web site URL/ AIBench /index.html}.", "after_revision": "Modern real-world application scenarios like Internet services not only consist of diversity of AI and non-AI modules with very long and complex execution paths , but also have huge code size, which raises serious benchmarking or evaluating challenges. Using AI components or micro benchmarks alone can lead to error-prone conclusions. This paper presents a scenario-distilling methodology to attack the above challenge. We formalize a real-world application scenario as a Directed Acyclic Graph-based model, and propose the rules to distill it into the permutation of essential AI and non-AI tasks as a high-level scenario benchmark specification. Together with seventeen industry partners, we extract nine typical application scenarios, and identify the primary components . We design and implement a highly extensible, configurable, and flexible benchmark framework , on the basis of which, we implement two Internet service AI scenario benchmarks as proxies to two real-world application scenarios. We claim scenario, component and micro benchmarks should be considered as three indispensable parts for evaluating. Our evaluation shows the advantage of our methodology against using component or micro AI benchmarks alone. The specifications, source code, testbed, and results are publicly available from URL/ aibench-scenario /index.html}.", "edit_actions": [{"type": "R", "before": "Real-world", "after": "Modern real-world", "start_char_pos": 0, "end_char_pos": 10}, {"type": "R", "before": "modern Internet services", "after": "Internet services not only", "start_char_pos": 38, "end_char_pos": 62}, {"type": "R", "before": ". Using component or micro AI", "after": ", but also have huge code size, which raises serious benchmarking or evaluating challenges. 
Using AI components or micro", "start_char_pos": 152, "end_char_pos": 181}, {"type": "R", "before": "proposes", "after": "presents", "start_char_pos": 247, "end_char_pos": 255}, {"type": "R", "before": "AI benchmarking methodology . Instead of using", "after": "methodology to attack the above challenge. We formalize a", "start_char_pos": 278, "end_char_pos": 324}, {"type": "R", "before": "applications, we propose the permutations", "after": "application scenario as a Directed Acyclic Graph-based model, and propose the rules to distill it into the permutation", "start_char_pos": 336, "end_char_pos": 377}, {"type": "R", "before": "scenario-distilling benchmark . We consider scenario-distilling benchmarks, component and micro benchmarks as three indispensable parts of a benchmark suite.", "after": "high-level scenario benchmark specification.", "start_char_pos": 416, "end_char_pos": 573}, {"type": "R", "before": "identify nine important real-world application scenarios", "after": "extract nine typical application scenarios, and identify the primary components", "start_char_pos": 620, "end_char_pos": 676}, {"type": "R", "before": ". On", "after": ", on", "start_char_pos": 771, "end_char_pos": 775}, {"type": "R", "before": "the framework, we propose the guideline for building scenario-distilling benchmarks, and present", "after": "which, we implement", "start_char_pos": 789, "end_char_pos": 885}, {"type": "R", "before": "ones. The preliminary", "after": "scenario benchmarks as proxies to two real-world application scenarios. We claim scenario, component and micro benchmarks should be considered as three indispensable parts for evaluating. 
Our", "start_char_pos": 910, "end_char_pos": 931}, {"type": "R", "before": "scenario-distilling AI benchmarking", "after": "our methodology", "start_char_pos": 966, "end_char_pos": 1001}, {"type": "D", "before": "the web site", "after": null, "start_char_pos": 1138, "end_char_pos": 1150}, {"type": "R", "before": "AIBench", "after": "aibench-scenario", "start_char_pos": 1156, "end_char_pos": 1163}], "sents_char_pos": [0, 235, 307, 447, 573, 678, 772, 915, 1055]} {"doc_id": "2005.03651", "revision_depth": "1", "before_revision": "We analyze risk factors correlated with the initial transmission growth rate of the COVID-19 pandemic . The number of cases follows an early exponential expansion; we chose as a starting point in each country the first day with 30 cases and used 12 days . We looked for linear correlations of the exponents with other variables, using 126 countries. We find a positive correlation with high C.L. with{\\it the following variables, with respective p-value: low Temperature (4\\cdot10^{-7}), high ratio of old vs.~working-age people (3\\cdot10^{-6}), life expectancy (8\\cdot10^{-6}), number of international tourists (1\\cdot10^{-5}), earlier epidemic starting date (2\\cdot10^{-5}), high level of contact in greeting habits (6 \\cdot 10^{-5}), lung cancer (6 \\cdot 10^{-5}), obesity in males (1 \\cdot 10^{-4}), urbanization (2\\cdot10^{-4}), cancer prevalence (3 \\cdot 10^{-4}), alcohol consumption (0.0019), daily smoking prevalence (0.0036), UV index (0.004, smaller sample, 73 countries) , low Vitamin D levels ( p-value 0.002-0.006, smaller sample, \\sim 50 countries ). There is highly significant correlation also with blood type : positive correlation with RH- ( 2 \\cdot10^{-5}) and A+ ( 2 \\cdot10^{-3}), negative correlation with B+ (2\\cdot10^{-4}). We also find positive correlation with moderate C.L. (p-value of 0.02%DIFDELCMD < \\sim0%%% .03) with: CO_2 emissions, type-1 diabetes, low vaccination coverage for Tuberculosis (BCG). 
Several such variables are correlated with each other and so they likely have common interpretations. We also analyzed the possible existence of a bias: countries with low GDP-per capita , typically located in warm regions, might have less intense testing and we discuss correlation with the above variables.", "after_revision": "We analyze risk factors correlated with the initial transmission growth rate of the recent COVID-19 pandemic in different countries . The number of cases follows in its early stages an almost exponential expansion; we chose as a starting point in each country the first day d_i with 30 cases and we fitted for 12 days , capturing thus the early exponential growth . We looked then for linear correlations of the exponents \\alpha with other variables, for a sample of 126 countries. We find a positive correlation ,{\\it i.e. faster spread of COVID-19 , with high confidence level with the following variables, with respective p-value: low Temperature (4\\cdot10^{-7}), high ratio of old vs.~working-age people (3\\cdot10^{-6}), life expectancy (8\\cdot10^{-6}), number of international tourists (1\\cdot10^{-5}), earlier epidemic starting date d_i (2\\cdot10^{-5}), high level of physical contact in greeting habits (6 \\cdot 10^{-5}), lung cancer prevalence (6 \\cdot 10^{-5}), obesity in males (1 \\cdot 10^{-4}), share of population in urban areas (2\\cdot10^{-4}), cancer prevalence (3 \\cdot 10^{-4}), alcohol consumption (0.0019), daily smoking prevalence (0.0036), UV index (0.004, 73 countries) . We also find a correlation with low Vitamin D levels ( 0.002-0.006, smaller sample, \\sim 50 countries , to be confirmed on a larger sample ). There is highly significant correlation also with blood types : positive correlation with types RH- ( 3 \\cdot10^{-5}) and A+ ( 3 \\cdot10^{-3}), negative correlation with B+ (2\\cdot10^{-4}). %DIFDELCMD < \\sim0%%% Several of the above variables are intercorrelated and likely to have common interpretations. 
We performed a Principal Component Analysis, in order to find their significant independent linear combinations. We also analyzed a possible bias: countries with low GDP-per capita might have less testing and we discuss correlation with the above variables.", "edit_actions": [{"type": "A", "before": null, "after": "recent", "start_char_pos": 84, "end_char_pos": 84}, {"type": "A", "before": null, "after": "in different countries", "start_char_pos": 103, "end_char_pos": 103}, {"type": "R", "before": "an early", "after": "in its early stages an almost", "start_char_pos": 134, "end_char_pos": 142}, {"type": "A", "before": null, "after": "d_i", "start_char_pos": 225, "end_char_pos": 225}, {"type": "R", "before": "used", "after": "we fitted for", "start_char_pos": 244, "end_char_pos": 248}, {"type": "A", "before": null, "after": ", capturing thus the early exponential growth", "start_char_pos": 257, "end_char_pos": 257}, {"type": "A", "before": null, "after": "then", "start_char_pos": 270, "end_char_pos": 270}, {"type": "A", "before": null, "after": "\\alpha", "start_char_pos": 312, "end_char_pos": 312}, {"type": "R", "before": "using", "after": "for a sample of", "start_char_pos": 335, "end_char_pos": 340}, {"type": "R", "before": "with high C.L. with", "after": ",", "start_char_pos": 387, "end_char_pos": 406}, {"type": "A", "before": null, "after": "i.e. 
faster spread of COVID-19", "start_char_pos": 411, "end_char_pos": 411}, {"type": "A", "before": null, "after": ", with high confidence level with", "start_char_pos": 412, "end_char_pos": 412}, {"type": "A", "before": null, "after": "d_i", "start_char_pos": 668, "end_char_pos": 668}, {"type": "A", "before": null, "after": "physical", "start_char_pos": 700, "end_char_pos": 700}, {"type": "A", "before": null, "after": "prevalence", "start_char_pos": 759, "end_char_pos": 759}, {"type": "R", "before": "urbanization", "after": "share of population in urban areas", "start_char_pos": 815, "end_char_pos": 827}, {"type": "D", "before": "smaller sample,", "after": null, "start_char_pos": 964, "end_char_pos": 979}, {"type": "R", "before": ",", "after": ". We also find a correlation with", "start_char_pos": 994, "end_char_pos": 995}, {"type": "D", "before": "p-value", "after": null, "start_char_pos": 1019, "end_char_pos": 1026}, {"type": "A", "before": null, "after": ", to be confirmed on a larger sample", "start_char_pos": 1074, "end_char_pos": 1074}, {"type": "R", "before": "type", "after": "types", "start_char_pos": 1134, "end_char_pos": 1138}, {"type": "A", "before": null, "after": "types", "start_char_pos": 1167, "end_char_pos": 1167}, {"type": "R", "before": "2", "after": "3", "start_char_pos": 1174, "end_char_pos": 1175}, {"type": "R", "before": "2", "after": "3", "start_char_pos": 1199, "end_char_pos": 1200}, {"type": "D", "before": "We also find positive correlation with moderate C.L. (p-value of 0.02", "after": null, "start_char_pos": 1262, "end_char_pos": 1331}, {"type": "R", "before": ".03) with: CO_2 emissions, type-1 diabetes, low vaccination coverage for Tuberculosis (BCG). 
Several such variables are correlated with each other and so they likely", "after": "Several of the above variables are intercorrelated and likely to", "start_char_pos": 1353, "end_char_pos": 1518}, {"type": "R", "before": "also analyzed the possible existence of a", "after": "performed a Principal Component Analysis, in order to find their significant independent linear combinations. We also analyzed a possible", "start_char_pos": 1551, "end_char_pos": 1592}, {"type": "D", "before": ", typically located in warm regions,", "after": null, "start_char_pos": 1633, "end_char_pos": 1669}, {"type": "D", "before": "intense", "after": null, "start_char_pos": 1686, "end_char_pos": 1693}], "sents_char_pos": [0, 105, 165, 259, 355, 462, 495, 553, 993, 1077, 1261, 1445, 1547]} {"doc_id": "2005.05298", "revision_depth": "2", "before_revision": "This paper presents a new method SOLOIST , which uses transfer learning to efficiently build task-oriented dialog systems at scale. We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator) into a single neural model. We pre-train, on large heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion. The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching . Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost . 
We will release our code and pre-trained models for reproducible research.", "after_revision": "We present a new method SOLOIST that uses transfer learning and machine teaching to build task bots at scale. We parameterize classical modular task-oriented dialog systems using a Transformer-based auto-regressive language model, which subsumes different dialog modules into a single neural model. We pre-train, on heterogeneous dialog corpora, a task-grounded response generation model, which can generate dialog responses grounded in user goals and real-world knowledge for task completion. The pre-trained model can be efficiently adapted to accomplish new tasks with a handful of task-specific dialogs via machine teaching , where training samples are generated by human teachers interacting with the system. Experiments show that (i) SOLOIST creates new state-of-the-art on well-studied task-oriented dialog benchmarks, including CamRest676 and MultiWOZ; (ii) in the few-shot fine-tuning settings, SOLOIST significantly outperforms existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost of fine-tuning. 
The pre-trained models and codes are available at URL", "edit_actions": [{"type": "R", "before": "This paper presents", "after": "We present", "start_char_pos": 0, "end_char_pos": 19}, {"type": "R", "before": ", which", "after": "that", "start_char_pos": 41, "end_char_pos": 48}, {"type": "R", "before": "to efficiently build task-oriented dialog systems", "after": "and machine teaching to build task bots", "start_char_pos": 72, "end_char_pos": 121}, {"type": "R", "before": "a dialog system", "after": "classical modular task-oriented dialog systems", "start_char_pos": 148, "end_char_pos": 163}, {"type": "D", "before": "(e.g., state tracker, dialog policy, response generator)", "after": null, "start_char_pos": 262, "end_char_pos": 318}, {"type": "D", "before": "large", "after": null, "start_char_pos": 364, "end_char_pos": 369}, {"type": "R", "before": "large-scale Transformer model", "after": "task-grounded response generation model,", "start_char_pos": 402, "end_char_pos": 431}, {"type": "R", "before": "a new dialog task", "after": "new tasks", "start_char_pos": 600, "end_char_pos": 617}, {"type": "R", "before": ". Our experiments demonstrate", "after": ", where training samples are generated by human teachers interacting with the system. Experiments show", "start_char_pos": 679, "end_char_pos": 708}, {"type": "R", "before": "results on two well-known benchmarks, CamRest and MultiWOZ,", "after": "on well-studied task-oriented dialog benchmarks, including CamRest676 and MultiWOZ;", "start_char_pos": 755, "end_char_pos": 814}, {"type": "R", "before": "learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by", "after": "fine-tuning settings, SOLOIST significantly outperforms", "start_char_pos": 836, "end_char_pos": 937}, {"type": "R", "before": ". We will release our code and", "after": "of fine-tuning. 
The", "start_char_pos": 1034, "end_char_pos": 1064}, {"type": "R", "before": "for reproducible research.", "after": "and codes are available at URL", "start_char_pos": 1084, "end_char_pos": 1110}], "sents_char_pos": [0, 131, 346, 536, 680, 1035]} {"doc_id": "2005.06297", "revision_depth": "1", "before_revision": "We study the epidemic patterns of the COVID-19 virus in Argentinafrom a mathematical modelling perspective . We implement an SEIR model , consisting of a set of first-order temporal differential equations (ODE's) to analyze the time evolution of the disease caused by the virus. The model is applied to the city of Buenos Aires and neighbouring cities (RMBA) with approximately 15 million inhabitants. The parameters of the model are calibrated by using as data the number of casualties officially reported. Since there are infinite solutions honouring the data, we show a set of cases by considering different situations. The first set of parameters yields initially a reproduction ratio R0 = 3.30 decreasing to 0.92 in April 8, after the lockdown, but increasing to 2.44 after April 27, most probably due to an increase of the contagion in highly populated slums. This case has incubation and infection periods of 11 and 7 days, approximately, and about 13 million people infected at the end of the epidemic. The infection fatality rate (IFR) is 1.88 \\% and the predicted number of casualties is approximately 249000 deaths at the end of the epidemic. However, this death toll is highly affected by the evolution of the reproduction ratio, and keeping R0 = 0.92 after April 27, would cause 1321 casualties and only 66000 infected individuals. Other cases, assuming the present trend, predict smaller incubation periods ( between 4 and 5 days) and yield between 30000 and 90000 deaths and IFRs between 0.5 \\% and 1 \\%. 
This means that the intensity of the lockdown (and behaviour of the population) is essential and that the measures have to be guided by precise model predictions. We also consider doubling the number of casualties to date and, in this case, the death toll is almost 44000 individuals and about 5.1 million people being infected .", "after_revision": "A pandemic caused by a new coronavirus has spread worldwide, causing an epidemic in Argentina . We implement an SEIR model to analyze the evolution of the disease in Buenos Aires and neighbouring cities (RMBA) with 15 million inhabitants. The parameters of the model are calibrated by using as data the number of casualties officially reported. Since infinite solutions honour the data, we show a set of cases by considering different situations. The first set of parameters yields initially a reproduction ratio R0 = 3.33 decreasing to 0.95 in April 8, after the lockdown, but increasing to 1.55 after April 27, most probably due to an increase of the contagion in highly populated slums. The infection fatality rate (IFR) is 1.88 \\% and the predicted number of casualties is 173000 deaths with 9 million people infected at the end of the epidemic. However, keeping R_0 = 0.95 after April 27, would cause only 1881 casualties and 92955 infected individuals. Other cases, assuming the present trend, predict smaller incubation periods ( 4-5 days) and yield between 20000 and 70000 deaths and IFRs between 0.5 \\% and 1.1 \\%. We also consider doubling the number of casualties , with a death toll of 44000 individuals and 5.1 million infected individuals. Other choices of parameters also provide a good fit of the data, indicating the uncertainty of the results, which may differ from reported values. 
The analysis allows us to study how isolation and social distancing measures affect the time evolution of the epidemic .", "edit_actions": [{"type": "R", "before": "We study the epidemic patterns of the COVID-19 virus in Argentinafrom a mathematical modelling perspective", "after": "A pandemic caused by a new coronavirus has spread worldwide, causing an epidemic in Argentina", "start_char_pos": 0, "end_char_pos": 106}, {"type": "D", "before": ", consisting of a set of first-order temporal differential equations (ODE's)", "after": null, "start_char_pos": 136, "end_char_pos": 212}, {"type": "D", "before": "time", "after": null, "start_char_pos": 228, "end_char_pos": 232}, {"type": "R", "before": "caused by the virus. The model is applied to the city of", "after": "in", "start_char_pos": 258, "end_char_pos": 314}, {"type": "D", "before": "approximately", "after": null, "start_char_pos": 364, "end_char_pos": 377}, {"type": "R", "before": "there are infinite solutions honouring", "after": "infinite solutions honour", "start_char_pos": 514, "end_char_pos": 552}, {"type": "R", "before": "3.30 decreasing to 0.92", "after": "3.33 decreasing to 0.95", "start_char_pos": 694, "end_char_pos": 717}, {"type": "R", "before": "2.44", "after": "1.55", "start_char_pos": 768, "end_char_pos": 772}, {"type": "D", "before": "This case has incubation and infection periods of 11 and 7 days, approximately, and about 13 million people infected at the end of the epidemic.", "after": null, "start_char_pos": 866, "end_char_pos": 1010}, {"type": "R", "before": "approximately 249000 deaths", "after": "173000 deaths with 9 million people infected", "start_char_pos": 1098, "end_char_pos": 1125}, {"type": "R", "before": "this death toll is highly affected by the evolution of the reproduction ratio, and keeping R0", "after": "keeping R_0", "start_char_pos": 1163, "end_char_pos": 1256}, {"type": "R", "before": "0.92", "after": "0.95", "start_char_pos": 1259, "end_char_pos": 1263}, {"type": "D", 
"before": "1321 casualties and", "after": null, "start_char_pos": 1292, "end_char_pos": 1311}, {"type": "R", "before": "66000", "after": "1881 casualties and 92955", "start_char_pos": 1317, "end_char_pos": 1322}, {"type": "R", "before": "between 4 and 5", "after": "4-5", "start_char_pos": 1423, "end_char_pos": 1438}, {"type": "R", "before": "30000 and 90000", "after": "20000 and 70000", "start_char_pos": 1463, "end_char_pos": 1478}, {"type": "R", "before": "1 \\%. This means that the intensity of the lockdown (and behaviour of the population) is essential and that the measures have to be guided by precise model predictions.", "after": "1.1 \\%.", "start_char_pos": 1514, "end_char_pos": 1682}, {"type": "R", "before": "to date and, in this case, the death toll is almost", "after": ", with a death toll of", "start_char_pos": 1734, "end_char_pos": 1785}, {"type": "D", "before": "about", "after": null, "start_char_pos": 1808, "end_char_pos": 1813}, {"type": "R", "before": "people being infected", "after": "infected individuals. Other choices of parameters also provide a good fit of the data, indicating the uncertainty of the results, which may differ from reported values. The analysis allows us to study how isolation and social distancing measures affect the time evolution of the epidemic", "start_char_pos": 1826, "end_char_pos": 1847}], "sents_char_pos": [0, 108, 278, 401, 507, 622, 865, 1010, 1153, 1344, 1519, 1682]} {"doc_id": "2005.06712", "revision_depth": "1", "before_revision": "Biomolecular condensates underlain by liquid-liquid phase separation (LLPS) of proteins and nucleic acids can serve important biological functions; yet current understanding of the effects of amino acid sequences on LLPS is limited. 
Endeavoring toward a transferable, predictive coarse-grained explicit-chain model for biomolecular LLPS, we used the N-terminal intrinsically disordered region (IDR) of the DEAD-box helicase Ddx4 as a test case to conduct extensive multiple-chain simulations to assess the roles of electrostatic, hydrophobic, cation-\\pi, and aromatic interactions in sequence-specific phase behaviors. Three different residue-residue interaction schemes sharing the same electrostatic potentialwere evaluated. We found that neither a common scheme based on amino acid hydrophobicity nor one augmented with arginine/lysine-aromatic cation-\\pi interactions can consistently account for the available experimental LLPS data on the wildtype, a charge-scrambled mutant, a phenylalanine-to-alanine (FtoA) mutant and an arginine-to-lysine (RtoK ) mutant of the Ddx4 IDR. In contrast, an interaction scheme based on contact statistics among folded globular protein structures reproduces the overall experimental trend, including that the RtoK mutant has a much diminished LLPS propensity. This finding underscores the important role of \\pi-related interactions in LLPS and that their effects are embodied to a degree in classical statistical potentials. Protein-protein electrostatic interactions are modulated by relative permittivity, which in general depends on protein concentration in the aqueous medium . Analytical theory suggests that this dependence entails enhanced inter-protein interactions in the condensed phase but more favorable protein-solvent interactions in the dilute phase. 
The opposing trends lead to only a modest overall impact on LLPS.", "after_revision": " Endeavoring toward a transferable, predictive coarse-grained explicit-chain model for biomolecular condensates underlain by liquid-liquid phase separation (LLPS), we conducted multiple-chain simulations of the N-terminal intrinsically disordered region (IDR) of DEAD-box helicase Ddx4 , as a test case , to assess the roles of electrostatic, hydrophobic, cation-\\pi, and aromatic interactions in amino acid sequence-dependent LLPS. We evaluated 3 residue-residue interaction schemes with a shared electrostatic potential. Neither a common hydrophobicity scheme nor one augmented with arginine/lysine-aromatic cation-\\pi interactions consistently accounted for the experimental LLPS data on the wildtype, a charge-scrambled , an FtoA, and an RtoK mutant of Ddx4 IDR. In contrast, interactions based on contact statistics among folded globular protein structures reproduce the overall experimental trend, including that the RtoK mutant has a much diminished LLPS propensity. Consistency between simulation and LLPS experiment was also found for RtoK mutants of P-granule protein LAF-1, underscoring that, to a degree, the important LLPS-driving \\pi-related interactions are embodied in classical statistical potentials. Further elucidation will be necessary, however, especially of phenylalanine's role in condensate assembly because experiments on FtoA and YtoF mutants suggest that LLPS-driving phenylalanine interactions are significantly weaker than those posited by common statistical potentials. Protein-protein electrostatic interactions are modulated by relative permittivity, which depends on protein concentration . Analytical theory suggests that this dependence entails enhanced inter-protein interactions in the condensed phase but more favorable protein-solvent interactions in the dilute phase. 
The opposing trends lead to a modest overall impact on LLPS.", "edit_actions": [{"type": "D", "before": "Biomolecular condensates underlain by liquid-liquid phase separation (LLPS) of proteins and nucleic acids can serve important biological functions; yet current understanding of the effects of amino acid sequences on LLPS is limited.", "after": null, "start_char_pos": 0, "end_char_pos": 232}, {"type": "R", "before": "LLPS, we used", "after": "condensates underlain by liquid-liquid phase separation (LLPS), we conducted multiple-chain simulations of", "start_char_pos": 332, "end_char_pos": 345}, {"type": "D", "before": "the", "after": null, "start_char_pos": 402, "end_char_pos": 405}, {"type": "A", "before": null, "after": ",", "start_char_pos": 429, "end_char_pos": 429}, {"type": "R", "before": "to conduct extensive multiple-chain simulations to", "after": ", to", "start_char_pos": 445, "end_char_pos": 495}, {"type": "R", "before": "sequence-specific phase behaviors. Three different", "after": "amino acid sequence-dependent LLPS. We evaluated 3", "start_char_pos": 585, "end_char_pos": 635}, {"type": "R", "before": "sharing the same electrostatic potentialwere evaluated. We found that neither a common scheme based on amino acid hydrophobicity", "after": "with a shared electrostatic potential. 
Neither a common hydrophobicity scheme", "start_char_pos": 672, "end_char_pos": 800}, {"type": "R", "before": "can consistently account for the available", "after": "consistently accounted for the", "start_char_pos": 873, "end_char_pos": 915}, {"type": "R", "before": "mutant, a phenylalanine-to-alanine (FtoA) mutant and an arginine-to-lysine (RtoK ) mutant of the", "after": ", an FtoA, and an RtoK mutant of", "start_char_pos": 975, "end_char_pos": 1071}, {"type": "R", "before": "an interaction scheme", "after": "interactions", "start_char_pos": 1095, "end_char_pos": 1116}, {"type": "R", "before": "reproduces", "after": "reproduce", "start_char_pos": 1186, "end_char_pos": 1196}, {"type": "R", "before": "This finding underscores the important role of", "after": "Consistency between simulation and LLPS experiment was also found for RtoK mutants of P-granule protein LAF-1, underscoring that, to a degree, the important LLPS-driving", "start_char_pos": 1299, "end_char_pos": 1345}, {"type": "R", "before": "in LLPS and that their effects are embodied to a degree", "after": "are embodied", "start_char_pos": 1371, "end_char_pos": 1426}, {"type": "A", "before": null, "after": "Further elucidation will be necessary, however, especially of phenylalanine's role in condensate assembly because experiments on FtoA and YtoF mutants suggest that LLPS-driving phenylalanine interactions are significantly weaker than those posited by common statistical potentials.", "start_char_pos": 1464, "end_char_pos": 1464}, {"type": "D", "before": "in general", "after": null, "start_char_pos": 1554, "end_char_pos": 1564}, {"type": "D", "before": "in the aqueous medium", "after": null, "start_char_pos": 1598, "end_char_pos": 1619}, {"type": "D", "before": "only", "after": null, "start_char_pos": 1834, "end_char_pos": 1838}], "sents_char_pos": [0, 147, 232, 619, 727, 1081, 1298, 1463, 1621, 1805]} {"doc_id": "2005.07012", "revision_depth": "1", "before_revision": "The transmission of infectious 
diseases depends on the social networks among people and the protections that people have taken before being exposed to the disease. Mass media is playing a key role in making the public aware of the disease and its transmissibility and severity. Motivated by the importance of heterogeneous risk perception in the population to response to infectious disease outbreaks , we propose a heterogeneous three-layer network model, namely the Susceptible-Exposed-Infectious-Recovered Unaware-Aware-Protected (SEIR-UAP) model, where people's vulnerability to the disease is influenced by the processes of awareness information diffusion, preventive behavior changeand infectious disease transmission. We found that (a ) the awareness of the disease plays the central role in preventing disease outbreak ; (b) we need a reasonable ratio of \"over-reacting\" nodes to effectively control the disease outbreak ; (c) diseases with a longer incubation period and a higher recovery rate are easier to control because the processes of information diffusion and behavior change can help people prepare for the upcoming exposure to the disease ; (d) it is more difficult to control the disease with asymptomatic cases. The results provide evidence that mass media should not play down the transmissibility and severity of diseases , so that the public can become aware of the disease as soon as possible .", "after_revision": " Motivated by the importance of individual differences in risk perception and behavior change in people's responses to infectious disease outbreaks (particularly the ongoing COVID-19 pandemic) , we propose a heterogeneous Disease-Behavior-Information (hDBI) transmission model, in which people's risk of getting infected is influenced by information diffusion, behavior change, and disease transmission. We use both a mean-field approximation and Monte Carlo simulations to analyze the dynamics of the model. 
Information diffusion influences behavior change by allowing people to be aware of the disease and adopt self-protection, and subsequently affects disease transmission by changing the actual infection rate. Results show that (a) awareness plays a central role in epidemic prevention ; (b) a reasonable fraction of \"over-reacting\" nodes are needed in epidemic prevention ; (c) R0 has different effects on epidemic outbreak for cases with and without asymptomatic infection ; (d) social influence on behavior change can remarkably decrease the epidemic outbreak size. This research indicates that the media and opinion leaders should not understate the transmissibility and severity of diseases to ensure that people could become aware of the disease and adopt self-protection to protect themselves and the whole population .", "edit_actions": [{"type": "D", "before": "The transmission of infectious diseases depends on the social networks among people and the protections that people have taken before being exposed to the disease. 
Mass media is playing a key role in making the public aware of the disease and its transmissibility and severity.", "after": null, "start_char_pos": 0, "end_char_pos": 277}, {"type": "R", "before": "heterogeneous risk perception in the population to response to", "after": "individual differences in risk perception and behavior change in people's responses to", "start_char_pos": 309, "end_char_pos": 371}, {"type": "A", "before": null, "after": "(particularly the ongoing COVID-19 pandemic)", "start_char_pos": 401, "end_char_pos": 401}, {"type": "R", "before": "three-layer network model, namely the Susceptible-Exposed-Infectious-Recovered Unaware-Aware-Protected (SEIR-UAP) model, where", "after": "Disease-Behavior-Information (hDBI) transmission model, in which", "start_char_pos": 431, "end_char_pos": 557}, {"type": "R", "before": "vulnerability to the disease", "after": "risk of getting infected", "start_char_pos": 567, "end_char_pos": 595}, {"type": "D", "before": "the processes of awareness", "after": null, "start_char_pos": 613, "end_char_pos": 639}, {"type": "R", "before": "preventive behavior changeand infectious", "after": "behavior change, and", "start_char_pos": 663, "end_char_pos": 703}, {"type": "R", "before": "found that (a ) the awareness of the disease plays the", "after": "use both a mean-field approximation and Monte Carlo simulations to analyze the dynamics of the model. Information diffusion influences behavior change by allowing people to be aware of the disease and adopt self-protection, and subsequently affects disease transmission by changing the actual infection rate. 
Results show that (a) awareness plays a", "start_char_pos": 729, "end_char_pos": 783}, {"type": "R", "before": "preventing disease outbreak", "after": "epidemic prevention", "start_char_pos": 800, "end_char_pos": 827}, {"type": "R", "before": "we need a reasonable ratio", "after": "a reasonable fraction", "start_char_pos": 834, "end_char_pos": 860}, {"type": "R", "before": "to effectively control the disease outbreak", "after": "are needed in epidemic prevention", "start_char_pos": 886, "end_char_pos": 929}, {"type": "R", "before": "diseases with a longer incubation period and a higher recovery rate are easier to control because the processes of information diffusion and behavior change can help people prepare for the upcoming exposure to the disease", "after": "R0 has different effects on epidemic outbreak for cases with and without asymptomatic infection", "start_char_pos": 936, "end_char_pos": 1157}, {"type": "R", "before": "it is more difficult to control the disease with asymptomatic cases. The results provide evidence that mass media should not play down", "after": "social influence on behavior change can remarkably decrease the epidemic outbreak size. This research indicates that the media and opinion leaders should not understate", "start_char_pos": 1164, "end_char_pos": 1298}, {"type": "R", "before": ", so that the public can", "after": "to ensure that people could", "start_char_pos": 1345, "end_char_pos": 1369}, {"type": "R", "before": "as soon as possible", "after": "and adopt self-protection to protect themselves and the whole population", "start_char_pos": 1398, "end_char_pos": 1417}], "sents_char_pos": [0, 163, 277, 725, 829, 931, 1159, 1232]} {"doc_id": "2005.07473", "revision_depth": "1", "before_revision": " Online Social Networks have become an important medium for communication among people who suffer from mental disorders to share moments of hardship and to seek support. 
Here we analyze how Reddit discussions can help improve the health conditions of its users. Using emotional tone of user publications as a proxy for his emotional state, we uncover relationships between state changes and interactions he has in a given community. We observe that authors of negative posts often write more positive comments after engaging in discussions . Second, we build models based on state-of-the-art embedding techniques and RNNs to predict shifts in emotional tone. We show that it is possible to predict with good accuracy the reaction of users of mental disorder online communities to the interactions experienced in these platforms . Our models could assist in interventions promoted by health care professionals to provide support to people suffering from mental health illnesses.", "after_revision": "In recent years, Online Social Networks have become an important medium for people who suffer from mental disorders to share moments of hardship , and receive emotional and informational support. In this work, we analyze how discussions in Reddit communities related to mental disorders can help improve the health conditions of their users. Using the emotional tone of users' writing as a proxy for emotional state, we uncover relationships between user interactions and state changes. First, we observe that authors of negative posts often write rosier comments after engaging in discussions , indicating that users' emotional state can improve due to social support . Second, we build models based on SOTA text embedding techniques and RNNs to predict shifts in emotional tone. This differs from most of related work, which focuses primarily on detecting mental disorders from user activity. We demonstrate the feasibility of accurately predicting the users' reactions to the interactions experienced in these platforms , and present some examples which illustrate that the models are correctly capturing the effects of comments on the author's emotional tone . 
Our models hold promising implications for interventions to provide support for people struggling with mental illnesses.", "edit_actions": [{"type": "A", "before": null, "after": "In recent years,", "start_char_pos": 0, "end_char_pos": 0}, {"type": "D", "before": "communication among", "after": null, "start_char_pos": 60, "end_char_pos": 79}, {"type": "R", "before": "and to seek support. Here", "after": ", and receive emotional and informational support. In this work,", "start_char_pos": 149, "end_char_pos": 174}, {"type": "R", "before": "Reddit discussions", "after": "discussions in Reddit communities related to mental disorders", "start_char_pos": 190, "end_char_pos": 208}, {"type": "R", "before": "its", "after": "their", "start_char_pos": 251, "end_char_pos": 254}, {"type": "A", "before": null, "after": "the", "start_char_pos": 268, "end_char_pos": 268}, {"type": "R", "before": "user publications", "after": "users' writing", "start_char_pos": 287, "end_char_pos": 304}, {"type": "D", "before": "his", "after": null, "start_char_pos": 320, "end_char_pos": 323}, {"type": "R", "before": "state changes and interactions he has in a given community. We", "after": "user interactions and state changes. First, we", "start_char_pos": 374, "end_char_pos": 436}, {"type": "R", "before": "more positive", "after": "rosier", "start_char_pos": 488, "end_char_pos": 501}, {"type": "A", "before": null, "after": ", indicating that users' emotional state can improve due to social support", "start_char_pos": 541, "end_char_pos": 541}, {"type": "R", "before": "state-of-the-art", "after": "SOTA text", "start_char_pos": 577, "end_char_pos": 593}, {"type": "R", "before": "We show that it is possible to predict with good accuracy the reaction of users of mental disorder online communities", "after": "This differs from most of related work, which focuses primarily on detecting mental disorders from user activity. 
We demonstrate the feasibility of accurately predicting the users' reactions", "start_char_pos": 661, "end_char_pos": 778}, {"type": "A", "before": null, "after": ", and present some examples which illustrate that the models are correctly capturing the effects of comments on the author's emotional tone", "start_char_pos": 830, "end_char_pos": 830}, {"type": "R", "before": "could assist in interventions promoted by health care professionals", "after": "hold promising implications for interventions", "start_char_pos": 844, "end_char_pos": 911}, {"type": "R", "before": "to people suffering from mental health", "after": "for people struggling with mental", "start_char_pos": 931, "end_char_pos": 969}], "sents_char_pos": [0, 169, 261, 433, 543, 660, 832]} {"doc_id": "2005.07593", "revision_depth": "1", "before_revision": "Fire is an integral part of the Earth for millennia. Recent wildfires exhibited an unprecedented spatial and temporal extend and their control is beyond national firefighting capabilities. Prescribed or controlled burning treatments are debated as a potential measure for ameliorating the spread and intensity of wildfires. Machine learning analysis using random forests was performed in a spatio-temporal data set comprising a large number of savanna fires across 22 years. Results indicate that controlled fire return interval accounts of 3.5\\% of fire spread and 3.5\\% of fire intensity. Manipulating burn seasonality accounted for 5\\% of fire spread and 6\\% of fire intensity. While manipulated fire return interval and seasonality moderated both fire spread and intensity, their overall effects were low in comparison with hydrological and climatic variables. Predicting fire spread and intensity has been a poor endeavour thus far and we show that more data of the variables already monitored would not result in higher predictive accuracy. 
Given that the main driving factors of fire spread are related to hydrological and climatic variables, we suggest investigating further the use of climatic refugia against wildfires", "after_revision": "Fire is an integral part of the Earth for millennia. Several recent wildfires have exhibited an unprecedented spatial and temporal extent and their control is beyond national firefighting capabilities. Prescribed or controlled burning treatments are debated as a potential measure for ameliorating the spread and intensity of wildfires. Machine learning analysis using random forests was performed in a spatio-temporal data set comprising a large number of savanna fires across 22 years. Results indicate that controlled fire return interval exhibited a feature importance of 3.5\\% regarding fire spread rate and 3.5\\% regarding fire intensity. Manipulating burn seasonality showed a feature importance of 5\\% regarding fire spread rate and 6\\% regarding fire intensity. While manipulated fire return interval and seasonality moderated both fire spread rate and intensity, their overall effects were low in comparison with meteorological ( hydrological and climatic ) variables. Predicting fire spread rate and intensity has been a poor endeavour thus far and we show that more data of the variables already monitored would not result in higher predictive accuracy. 
", "edit_actions": [{"type": "R", "before": "Recent wildfires", "after": "Several recent wildfires have", "start_char_pos": 53, "end_char_pos": 69}, {"type": "R", "before": "extend", "after": "extent", "start_char_pos": 118, "end_char_pos": 124}, {"type": "R", "before": "accounts", "after": "exhibited a feature importance", "start_char_pos": 529, "end_char_pos": 537}, {"type": "R", "before": "of fire spread", "after": "regarding fire spread rate", "start_char_pos": 547, "end_char_pos": 561}, {"type": "R", "before": "of", "after": "regarding", "start_char_pos": 572, "end_char_pos": 574}, {"type": "R", "before": "accounted for", "after": "showed a feature importance of", "start_char_pos": 621, "end_char_pos": 634}, {"type": "R", "before": "of fire spread", "after": "regarding fire spread rate", "start_char_pos": 639, "end_char_pos": 653}, {"type": "R", "before": "of", "after": "regarding", "start_char_pos": 662, "end_char_pos": 664}, {"type": "A", "before": null, "after": "rate", "start_char_pos": 763, "end_char_pos": 763}, {"type": "A", "before": null, "after": "meteorological (", "start_char_pos": 829, "end_char_pos": 829}, {"type": "A", "before": null, "after": ")", "start_char_pos": 856, "end_char_pos": 856}, {"type": "A", "before": null, "after": "rate", "start_char_pos": 891, "end_char_pos": 891}, {"type": "D", "before": "Given that the main driving factors of fire spread are related to hydrological and climatic variables, we suggest investigating further the use of climatic refugia against wildfires", "after": null, "start_char_pos": 1051, "end_char_pos": 1232}], "sents_char_pos": [0, 52, 188, 323, 474, 590, 680, 867, 1050]} {"doc_id": "2005.08505", "revision_depth": "1", "before_revision": "We study how the coronavirus disease 2019 ( COVID-19 ) pandemic, alongside the severe mobility restrictions that ensued, has impacted information access on Wikipedia, the world's largest online encyclopedia. 
A longitudinal analysis that combines pageview statistics for 12 Wikipedia \\WP language editions with mobility reports published by Apple and Google reveals a massive increase in access volume, accompanied by a stark shift in topical interests. Health- and entertainment- related topics are found to have gained, and sports- and transportation- related topics, to have lost attention . Interestingly, while the interest in health-related topics was transient , that in entertainment topics is lingering and even increasing . These changes began at the time when mobility was restricted and are most pronounced for language editions associated with countries , in which the most severe mobility restrictions were implemented , indicating that the interest shift might be caused by people's spending more time at home\\eg\\hyp \\hyp \\hyp . Our results highlight the utility of Wikipedia for studying reactions to the pandemic across the globe, and illustrate how the disease is rippling through society \\WP .", "after_revision": "We study how the COVID-19 pandemic, alongside the severe mobility restrictions that ensued, has impacted information access on Wikipedia, the world's largest online encyclopedia. A longitudinal analysis that combines pageview statistics for 12 \\WP language editions with mobility reports published by Apple and Google reveals massive shifts in thevolume and thenature of information seeking patterns amidst the pandemic . Interestingly, while we observe a transient increase in Wikipedia's pageview volume following mobility restrictions, the nature of information sought was impacted more permanently . These changes are most pronounced for language editions associated with countries in which the most severe mobility restrictions were implemented . 
We also find that articles belonging to different topics behaved differently;\\eg, attention towards entertainment\\hyp related topics is lingering and even increasing, while the interest in health\\hyp and biology\\hyp related topics was either small or transient . Our results highlight the utility of \\WP for studying how the pandemic affected people's needs, interests, and concerns .", "edit_actions": [{"type": "D", "before": "coronavirus disease 2019 (", "after": null, "start_char_pos": 17, "end_char_pos": 43}, {"type": "D", "before": ")", "after": null, "start_char_pos": 53, "end_char_pos": 54}, {"type": "D", "before": "Wikipedia", "after": null, "start_char_pos": 273, "end_char_pos": 282}, {"type": "R", "before": "a massive increase in access volume, accompanied by a stark shift in topical interests. Health- and entertainment- related topics are found to have gained, and sports- and transportation- related topics, to have lost attention", "after": "massive shifts in the", "start_char_pos": 365, "end_char_pos": 591}, {"type": "A", "before": null, "after": "volume", "start_char_pos": 591, "end_char_pos": 591}, {"type": "A", "before": null, "after": "and the", "start_char_pos": 592, "end_char_pos": 592}, {"type": "A", "before": null, "after": "nature", "start_char_pos": 592, "end_char_pos": 592}, {"type": "A", "before": null, "after": "of information seeking patterns amidst the pandemic", "start_char_pos": 593, "end_char_pos": 593}, {"type": "R", "before": "the interest in health-related topics was transient , that in entertainment topics is lingering and even increasing", "after": "we observe a transient increase in Wikipedia's pageview volume following mobility restrictions, the nature of information sought was impacted more permanently", "start_char_pos": 617, "end_char_pos": 732}, {"type": "D", "before": "began at the time when mobility was restricted and", "after": null, "start_char_pos": 749, "end_char_pos": 799}, {"type": "D", "before": ",", "after": 
null, "start_char_pos": 868, "end_char_pos": 869}, {"type": "R", "before": ", indicating that the interest shift might be caused by people's spending more time at home", "after": ". We also find that articles belonging to different topics behaved differently;", "start_char_pos": 934, "end_char_pos": 1025}, {"type": "A", "before": null, "after": ", attention towards entertainment", "start_char_pos": 1028, "end_char_pos": 1028}, {"type": "A", "before": null, "after": "related topics is lingering and even increasing, while the interest in health", "start_char_pos": 1033, "end_char_pos": 1033}, {"type": "A", "before": null, "after": "and biology", "start_char_pos": 1038, "end_char_pos": 1038}, {"type": "A", "before": null, "after": "related topics was either small or transient", "start_char_pos": 1043, "end_char_pos": 1043}, {"type": "D", "before": "Wikipedia for studying reactions to the pandemic across the globe, and illustrate how the disease is rippling through society", "after": null, "start_char_pos": 1083, "end_char_pos": 1208}, {"type": "A", "before": null, "after": "for studying how the pandemic affected people's needs, interests, and concerns", "start_char_pos": 1213, "end_char_pos": 1213}], "sents_char_pos": [0, 207, 452, 595, 734, 1045]} {"doc_id": "2005.08505", "revision_depth": "2", "before_revision": "We study how the COVID-19 pandemic, alongside the severe mobility restrictions that ensued, has impacted information access on Wikipedia, the world's largest online encyclopedia. A longitudinal analysis that combines pageview statistics for 12 %DIFDELCMD < \\WP %%% language editions with mobility reports published by Apple and Google reveals massive shifts in the volume and thenature of information seeking patterns amidst the pandemic. Interestingly, while we observe a transient increase in Wikipedia's pageview volume following mobility restrictions, the nature of information sought was impacted more permanently. 
These changes are most pronounced for language editions associated with countries in which the most severe mobility restrictions were implemented. We also find that articles belonging to different topics behaved differently; %DIFDELCMD < \\eg%%% , attention towards entertainment%DIFDELCMD < \\hyp %%% related topics is lingering and even increasing, while the interest in health%DIFDELCMD < \\hyp %%% and biology%DIFDELCMD < \\hyp %%% related topics was either small or transient. Our results highlight the utility of %DIFDELCMD < \\WP %%% for studying how the pandemic affected people's needs, interests, and concerns.", "after_revision": "We study how the COVID-19 pandemic, alongside the severe mobility restrictions that ensued, has impacted information access on Wikipedia, the world's largest online encyclopedia. A longitudinal analysis that combines pageview statistics for 12 %DIFDELCMD < \\WP %%% Wikipedia language editions with mobility reports published by Apple and Google reveals massive shifts in the volume and nature of information seeking patterns during the pandemic. Interestingly, while we observe a transient increase in Wikipedia's pageview volume following mobility restrictions, the nature of information sought was impacted more permanently. These changes are most pronounced for language editions associated with countries where the most severe mobility restrictions were implemented. We also find that articles belonging to different topics behaved differently; %DIFDELCMD < \\eg%%% e.g. , attention towards %DIFDELCMD < \\hyp %%% entertainment-related topics is lingering and even increasing, while the interest in %DIFDELCMD < \\hyp %%% %DIFDELCMD < \\hyp %%% health- and biology-related topics was either small or transient. 
Our results highlight the utility of %DIFDELCMD < \\WP %%% Wikipedia for studying how the pandemic is affecting people's needs, interests, and concerns.", "edit_actions": [{"type": "A", "before": null, "after": "Wikipedia", "start_char_pos": 265, "end_char_pos": 265}, {"type": "D", "before": "volume", "after": null, "start_char_pos": 366, "end_char_pos": 372}, {"type": "D", "before": "and the", "after": null, "start_char_pos": 373, "end_char_pos": 380}, {"type": "R", "before": "nature", "after": "volume and nature", "start_char_pos": 380, "end_char_pos": 386}, {"type": "R", "before": "amidst", "after": "during", "start_char_pos": 419, "end_char_pos": 425}, {"type": "R", "before": "in which", "after": "where", "start_char_pos": 703, "end_char_pos": 711}, {"type": "A", "before": null, "after": "e.g.", "start_char_pos": 866, "end_char_pos": 866}, {"type": "D", "before": "entertainment", "after": null, "start_char_pos": 887, "end_char_pos": 900}, {"type": "R", "before": "related", "after": "entertainment-related", "start_char_pos": 922, "end_char_pos": 929}, {"type": "D", "before": "health", "after": null, "start_char_pos": 993, "end_char_pos": 999}, {"type": "D", "before": "and biology", "after": null, "start_char_pos": 1021, "end_char_pos": 1032}, {"type": "R", "before": "related", "after": "health- and biology-related", "start_char_pos": 1054, "end_char_pos": 1061}, {"type": "A", "before": null, "after": "Wikipedia", "start_char_pos": 1158, "end_char_pos": 1158}, {"type": "R", "before": "affected", "after": "is affecting", "start_char_pos": 1189, "end_char_pos": 1197}], "sents_char_pos": [0, 178, 439, 620, 767, 845, 1099]} {"doc_id": "2005.09865", "revision_depth": "1", "before_revision": "A considerable literature is devoted to the introduction and analysis of variants of the SI epidemiology models. Similar models are also proposed to describe the spread of riots and, more generally, of collective behaviors in various social contexts. 
The use of epidemiology models to describe such social phenomena is based on the analogy between the mechanisms of contagion and social imitation. In turn, this analogy also points to the social nature of epidemics. This paper is concerned with a family of Reaction-Diffusion systems introduced in [ 17 ] that aims at unifying, generalizing, and enlarging the fields of application for epidemiologyand collective behavior models .In this paper, we propose a modeling approach on these apparently various phenomena through the example of the dynamics of social unrest. The model involves two quantities , the level of social unrest, or , more general, activity u, and a field of social tension v, which play asymmetric roles : u is thought of as the actual observed or explicit quantity while v is an ambiant, sometimes implicit field of susceptibility that modulates the growth of u. In this article, we explore this class of model and prove several theoretical results based on the framework developed in [ 17 ], of which the present work is a companion paper. Here we place the emphasis on two subclasses of systems defined in 17 : tension inhibiting and tension enhancing. These are characterized by the fact that the unrest has respectively a negative or positive feedback on the social tension (though no monotonicity condition is assumed). In 17%DIFDELCMD < ] %%% we derive a threshold phenomenon in terms of the initial level of social tension: below a critical value, a small triggering event is quickly followed by a resumption of calm, while, above this value, it generates an eruption of social unrest spreading through space with an asymptotically constant speed. The new results we derive in the present paper concern the behavior of the solution far from the propagating edge, that is, we give a description of the new regime of the system following the initial surge of activity. 
We show in particular that the model can give rise to many diverse qualitative dynamics : ephemeral or limited-duration social movements-referred to as \"riots\"-in the tension inhibiting case, and persisting social movements-lasting upheavals-in the tension enhancing case, as well as other more complex behaviors in some mixed cases . We also investigate this model by numerical simulations that highlight the richness of this framework. We finally propose and study extensions of the model, such as spatially heterogeneous systems .", "after_revision": " This paper is concerned with a family of Reaction-Diffusion systems that we introduced in [ 15 ] , and that generalizes the SIR type models from epidemiology. Such systems are now also used to describe collective behaviors .In this paper, we propose a modeling approach for these apparently diverse phenomena through the example of the dynamics of social unrest. The model involves two quantities : the level of social unrest, or more generally activity, u, and a field of social tension v, which play asymmetric roles . We think of u as the actually observed or explicit quantity while v is an ambiant, sometimes implicit , field of susceptibility that modulates the dynamics of u. In this article, we explore this class of model and prove several theoretical results based on the framework developed in [ 15 ], of which the present work is a companion paper. We particularly emphasize here two subclasses of systems : tension inhibiting and tension enhancing. These are characterized by respectively a negative or %DIFDELCMD < ] %%% a positivefeedback of the unrest on social tension. We establish several properties for these classes and also study some extensions. In particular, we describe the behavior of the system following an initial surge of activity. We show that the model can give rise to many diverse qualitative dynamics . 
We also provide a variety of numerical simulations to illustrate our results and to reveal further properties and open questions .", "edit_actions": [{"type": "D", "before": "A considerable literature is devoted to the introduction and analysis of variants of the SI epidemiology models. Similar models are also proposed to describe the spread of riots and, more generally, of collective behaviors in various social contexts. The use of epidemiology models to describe such social phenomena is based on the analogy between the mechanisms of contagion and social imitation. In turn, this analogy also points to the social nature of epidemics.", "after": null, "start_char_pos": 0, "end_char_pos": 466}, {"type": "A", "before": null, "after": "that we", "start_char_pos": 535, "end_char_pos": 535}, {"type": "R", "before": "17", "after": "15", "start_char_pos": 552, "end_char_pos": 554}, {"type": "R", "before": "that aims at unifying, generalizing, and enlarging the fields of application for epidemiologyand collective behavior models", "after": ", and that generalizes the SIR type models from epidemiology. Such systems are now also used to describe collective behaviors", "start_char_pos": 557, "end_char_pos": 680}, {"type": "R", "before": "on these apparently various", "after": "for these apparently diverse", "start_char_pos": 728, "end_char_pos": 755}, {"type": "R", "before": ",", "after": ":", "start_char_pos": 854, "end_char_pos": 855}, {"type": "R", "before": ", more general, activity", "after": "more generally activity,", "start_char_pos": 887, "end_char_pos": 911}, {"type": "R", "before": ": u is thought of as the actual", "after": ". 
We think of u as the actually", "start_char_pos": 976, "end_char_pos": 1007}, {"type": "A", "before": null, "after": ",", "start_char_pos": 1080, "end_char_pos": 1080}, {"type": "R", "before": "growth", "after": "dynamics", "start_char_pos": 1124, "end_char_pos": 1130}, {"type": "R", "before": "17", "after": "15", "start_char_pos": 1261, "end_char_pos": 1263}, {"type": "R", "before": "Here we place the emphasis on", "after": "We particularly emphasize here", "start_char_pos": 1315, "end_char_pos": 1344}, {"type": "D", "before": "defined in", "after": null, "start_char_pos": 1371, "end_char_pos": 1381}, {"type": "D", "before": "17", "after": null, "start_char_pos": 1382, "end_char_pos": 1384}, {"type": "D", "before": "the fact that the unrest has", "after": null, "start_char_pos": 1456, "end_char_pos": 1484}, {"type": "D", "before": "positive feedback on the social tension (though no monotonicity condition is assumed). In", "after": null, "start_char_pos": 1512, "end_char_pos": 1601}, {"type": "D", "before": "17", "after": null, "start_char_pos": 1602, "end_char_pos": 1604}, {"type": "R", "before": "we derive a threshold phenomenon in terms of the initial level of social tension: below a critical value, a small triggering event is quickly followed by a resumption of calm, while, above this value, it generates an eruption of social unrest spreading through space with an asymptotically constant speed. The new results we derive in the present paper concern the", "after": "a positivefeedback of the unrest on social tension. We establish several properties for these classes and also study some extensions. 
In particular, we describe the", "start_char_pos": 1623, "end_char_pos": 1987}, {"type": "R", "before": "solution far from the propagating edge, that is, we give a description of the new regime of the system following the", "after": "system following an", "start_char_pos": 2004, "end_char_pos": 2120}, {"type": "D", "before": "in particular", "after": null, "start_char_pos": 2156, "end_char_pos": 2169}, {"type": "D", "before": ": ephemeral or limited-duration social movements-referred to as \"riots\"-in the tension inhibiting case, and persisting social movements-lasting upheavals-in the tension enhancing case, as well as other more complex behaviors in some mixed cases", "after": null, "start_char_pos": 2236, "end_char_pos": 2480}, {"type": "R", "before": "investigate this model by numerical simulations that highlight the richness of this framework. We finally propose and study extensions of the model, such as spatially heterogeneous systems", "after": "provide a variety of numerical simulations to illustrate our results and to reveal further properties and open questions", "start_char_pos": 2491, "end_char_pos": 2679}], "sents_char_pos": [0, 112, 250, 397, 466, 682, 819, 1136, 1314, 1428, 1598, 1928, 2147, 2482, 2585]} {"doc_id": "2005.10731", "revision_depth": "2", "before_revision": "Volunteer crowdsourcing platforms , such as food URLanizations, match volunteers with tasks which are often recurring. To ensure completion of such tasks, platforms frequently use a commitment lever known as \"adoption . \" Despite being effective in reducing match uncertainty, high levels of adoption reduce match availability for volunteers which in turn can suppress future engagement . We study how platforms should balance these two opposing factors . Our research is motivated by a collaboration with Food Rescue U.S. (FRUS), a volunteer-based food URLanization active in over 33 locationsacross the U. S. 
For platforms such as FRUS, success crucially depends on efficient volunteer utilization and engagement. Consequently, effectively utilizing non-monetary levers, such as adoption, is critical. Based on our analysis of fine-grained FRUS data, we develop a model for a repeated two-sided matching market consisting of tasks (prearranged donations) and volunteers . Our model incorporates the uncertainty in match compatibility as well as the negative impact of failing to match on future engagement. We study the platform's optimal policy for setting the adoption level to maximize the total discounted number of matches. Our analysis reveals that the optimal myopic policy is either full or no adoption. For sufficiently thick markets , we show that such a myopic policy is also optimal in the long run. In thinner markets, even though a static policy of full or no adoption can be suboptimal, we show it achieves a constant-factor approximation where the factor improves with market thickness. Using our analytical and empirical results, we revisit the current design of the FRUS platform and make location-specific policy recommendations. More broadly, our work sheds light on how other two-sided platforms can control the double-edged impacts that commitment levers have on growth and engagement .", "after_revision": "Volunteer crowdsourcing platforms match volunteers with tasks which are often recurring. To ensure completion of such tasks, platforms frequently use a lever known as \"adoption , \" which amounts to a commitment by the volunteer to repeatedly perform the task. Despite reducing match uncertainty, high levels of adoption can decrease the probability of forming new matches, which in turn can suppress growth . We study how platforms should manage this trade-off . Our research is motivated by a collaboration with Food Rescue U.S. (FRUS), a volunteer-based food URLanization active in over 30 locations. 
For platforms such as FRUS, success crucially depends on volunteer engagement. Consequently, effectively utilizing non-monetary levers, such as adoption, is critical. Motivated by the volunteer management literature and our analysis of FRUS data, we develop a model for two-sided markets which repeatedly match volunteers with tasks . Our model incorporates match uncertainty as well as the negative impact of failing to match on future engagement. We study the platform's optimal policy for setting the adoption level to maximize the total discounted number of matches. We fully characterize the optimal myopic policy and show that it takes a simple form: depending on volunteer characteristics and market thickness, either allow for full adoption or disallow adoption. In the long run , we show that such a policy is either optimal or achieves a constant-factor approximation . Our finding is robust to incorporating heterogeneity in volunteer behavior. Our work sheds light on how two-sided platforms need to carefully control the double-edged impacts that commitment levers have on growth and engagement . A one-size-fits-all solution may not be effective, as the optimal design crucially depends on the characteristics of the volunteer population .", "edit_actions": [{"type": "D", "before": ", such as food URLanizations,", "after": null, "start_char_pos": 34, "end_char_pos": 63}, {"type": "D", "before": "commitment", "after": null, "start_char_pos": 182, "end_char_pos": 192}, {"type": "R", "before": ".", "after": ",", "start_char_pos": 218, "end_char_pos": 219}, {"type": "R", "before": "Despite being effective in", "after": "which amounts to a commitment by the volunteer to repeatedly perform the task. 
Despite", "start_char_pos": 222, "end_char_pos": 248}, {"type": "R", "before": "reduce match availability for volunteers", "after": "can decrease the probability of forming new matches,", "start_char_pos": 301, "end_char_pos": 341}, {"type": "R", "before": "future engagement", "after": "growth", "start_char_pos": 369, "end_char_pos": 386}, {"type": "R", "before": "balance these two opposing factors", "after": "manage this trade-off", "start_char_pos": 419, "end_char_pos": 453}, {"type": "R", "before": "33 locationsacross the U. S.", "after": "30 locations.", "start_char_pos": 582, "end_char_pos": 610}, {"type": "R", "before": "efficient volunteer utilization and", "after": "volunteer", "start_char_pos": 668, "end_char_pos": 703}, {"type": "R", "before": "Based on", "after": "Motivated by the volunteer management literature and", "start_char_pos": 804, "end_char_pos": 812}, {"type": "D", "before": "fine-grained", "after": null, "start_char_pos": 829, "end_char_pos": 841}, {"type": "D", "before": "a repeated", "after": null, "start_char_pos": 876, "end_char_pos": 886}, {"type": "R", "before": "matching market consisting of tasks (prearranged donations) and volunteers", "after": "markets which repeatedly match volunteers with tasks", "start_char_pos": 897, "end_char_pos": 971}, {"type": "R", "before": "the uncertainty in match compatibility", "after": "match uncertainty", "start_char_pos": 997, "end_char_pos": 1035}, {"type": "R", "before": "Our analysis reveals that", "after": "We fully characterize", "start_char_pos": 1231, "end_char_pos": 1256}, {"type": "R", "before": "is either full or no adoption. For sufficiently thick markets", "after": "and show that it takes a simple form: depending on volunteer characteristics and market thickness, either allow for full adoption or disallow adoption. In the long run", "start_char_pos": 1283, "end_char_pos": 1344}, {"type": "R", "before": "myopic policy is also optimal in the long run. 
In thinner markets, even though a static policy of full or no adoption can be suboptimal, we show it", "after": "policy is either optimal or", "start_char_pos": 1367, "end_char_pos": 1514}, {"type": "R", "before": "where the factor improves with market thickness. Using our analytical and empirical results, we revisit the current design of the FRUS platform and make location-specific policy recommendations. More broadly, our", "after": ". Our finding is robust to incorporating heterogeneity in volunteer behavior. Our", "start_char_pos": 1556, "end_char_pos": 1768}, {"type": "D", "before": "other", "after": null, "start_char_pos": 1793, "end_char_pos": 1798}, {"type": "R", "before": "can", "after": "need to carefully", "start_char_pos": 1819, "end_char_pos": 1822}, {"type": "A", "before": null, "after": ". A one-size-fits-all solution may not be effective, as the optimal design crucially depends on the characteristics of the volunteer population", "start_char_pos": 1909, "end_char_pos": 1909}], "sents_char_pos": [0, 118, 388, 455, 715, 803, 973, 1108, 1230, 1313, 1413, 1604, 1750]} {"doc_id": "2005.11161", "revision_depth": "1", "before_revision": "Multiple random walks is a model for movement of several independent random walkers on a graph, and it is applied to various graph algorithms. In order to design an efficient graph algorithm using multiple random walks, it is essential to study theoretical considerations for deeply understanding the characteristicsof graph algorithms . The first meeting time is one of the important metrics for the multiple random walks. The first meeting time is defined as the time it takes for multiple random walkers to meet on a same node . The first meeting time is closely related to the rendezvous problem . In various works, the first meeting time of multiple random walk has been analyzed. However, many of these previous works focused on regular graphs. 
In this paper, we analyze first meeting time of multiple random walks in arbitrary graph , and clarify the effect of graph structures on its expected value. First, we derive the spectral formula for the expected first meeting time of two random walkers using the spectral graph theory. Then, the principal component of the expected first meeting time are examined using the derived spectral formula. The resulting principal component reveals that (a)the expected first meeting time is almost dominated by n/(1+d_{\\rm std}^2/d_{\\rm avg}^2) and (b)the expected first meeting time is independent of the beginning nodes of multiple random walks where n is the number of nodes . d_{\\rm avg} and d_{\\rm std} are the mean and standard deviation of the weighted degree of each node , respectively. n and d_{\\rm avg, and d_{\\rm std} are related to the statistics of graph structures} . According to the analysis result , the variance of coefficient for weighted degrees, d_{\\rm std}/d_{\\rm avg}(degree heterogeneity) , facilitates quicker the meeting of random walkers.", "after_revision": "Multiple random walks are the movement of several independent random walkers on a graph, and are applied to various graph algorithms. In order to design an efficient graph algorithm based on multiple random walks, it is essential to study multiple random walks theoretically for deeply understanding their characteristics . The first meeting time is one of the important metrics for the multiple random walks. The first meeting time on a graph is defined by the time it takes for multiple random walkers to meet at the same node in graph . The first meeting time is closely related to the rendezvous problem , which is are a fundamental problem in the field of computer science. In previous works, the first meeting time of multiple random walks has been analyzed. However, many of these previous works focus on regular graphs. 
In this paper, we analyze the first meeting time of multiple random walks in arbitrary graphs , and clarify the effect of graph structures on its expected value. First, we derive the spectral formula of the expected first meeting time on the basis of the spectral graph theory. Then, we examine the principal component of the expected first meeting time using the derived spectral formula. The clarified principal component reveals that (a)the expected first meeting time is almost dominated by n/(1+d_{\\rm std}^2/d_{\\rm avg}^2) and (b)the expected first meeting time is independent of the starting nodes of random walkers where n is the number of nodes of the graph . d_{\\rm avg} and d_{\\rm std} are the mean and the standard deviation of weighted node degrees , respectively. , and d_{\\rm std} are related to the statistics of graph structures} The characteristics (a) is useful to understand the effect of the graph structure on the first meeting time . According to the revealed effect of graph structures , the variance of coefficient d_{\\rm std}/d_{\\rm avg}(degree heterogeneity) for weighted degrees facilitates quicker the meeting of random walkers.", "edit_actions": [{"type": "R", "before": "is a model for", "after": "are the", "start_char_pos": 22, "end_char_pos": 36}, {"type": "R", "before": "it is", "after": "are", "start_char_pos": 100, "end_char_pos": 105}, {"type": "R", "before": "using", "after": "based on", "start_char_pos": 191, "end_char_pos": 196}, {"type": "R", "before": "theoretical considerations", "after": "multiple random walks theoretically", "start_char_pos": 245, "end_char_pos": 271}, {"type": "R", "before": "the characteristicsof graph algorithms", "after": "their characteristics", "start_char_pos": 297, "end_char_pos": 335}, {"type": "R", "before": "is defined as", "after": "on a graph is defined by", "start_char_pos": 447, "end_char_pos": 460}, {"type": "R", "before": "on a same node", "after": "at the same node in graph", "start_char_pos": 515, 
"end_char_pos": 529}, {"type": "R", "before": ". In various", "after": ", which is are a fundamental problem in the field of computer science. In previous", "start_char_pos": 600, "end_char_pos": 612}, {"type": "R", "before": "walk", "after": "walks", "start_char_pos": 662, "end_char_pos": 666}, {"type": "R", "before": "focused", "after": "focus", "start_char_pos": 724, "end_char_pos": 731}, {"type": "A", "before": null, "after": "the", "start_char_pos": 777, "end_char_pos": 777}, {"type": "R", "before": "graph", "after": "graphs", "start_char_pos": 835, "end_char_pos": 840}, {"type": "R", "before": "for", "after": "of", "start_char_pos": 947, "end_char_pos": 950}, {"type": "R", "before": "of two random walkers using", "after": "on the basis of", "start_char_pos": 983, "end_char_pos": 1010}, {"type": "A", "before": null, "after": "we examine", "start_char_pos": 1044, "end_char_pos": 1044}, {"type": "D", "before": "are examined", "after": null, "start_char_pos": 1104, "end_char_pos": 1116}, {"type": "R", "before": "resulting", "after": "clarified", "start_char_pos": 1157, "end_char_pos": 1166}, {"type": "R", "before": "beginning nodes of multiple random walks", "after": "starting nodes of random walkers", "start_char_pos": 1353, "end_char_pos": 1393}, {"type": "A", "before": null, "after": "of the graph", "start_char_pos": 1425, "end_char_pos": 1425}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1473, "end_char_pos": 1473}, {"type": "R", "before": "the weighted degree of each node", "after": "weighted node degrees", "start_char_pos": 1496, "end_char_pos": 1528}, {"type": "D", "before": "n and d_{\\rm avg", "after": null, "start_char_pos": 1545, "end_char_pos": 1561}, {"type": "A", "before": null, "after": "The characteristics (a) is useful to understand the effect of the graph structure on the first meeting time", "start_char_pos": 1630, "end_char_pos": 1630}, {"type": "R", "before": "analysis result", "after": "revealed effect of graph 
structures", "start_char_pos": 1650, "end_char_pos": 1665}, {"type": "D", "before": "for weighted degrees,", "after": null, "start_char_pos": 1696, "end_char_pos": 1717}, {"type": "R", "before": ",", "after": "for weighted degrees", "start_char_pos": 1764, "end_char_pos": 1765}], "sents_char_pos": [0, 142, 337, 423, 531, 601, 685, 750, 908, 1037, 1152, 1291, 1763]} {"doc_id": "2005.11232", "revision_depth": "1", "before_revision": "We consider the problem of computing \\sum_x e^{f(x)}, where f (x)=\\sum_{ij a_{ij} \\xi_i \\xi_j + \\sum_i b_i \\xi_i is a real-valued quadratic function and x=}%DIFDELCMD < \\left(%%% \\xi_1, \\ldots, \\xi_n%DIFDELCMD < \\right) %%% ranges over{\\Bbb the Boolean cube \\{-1, 1\\}^n. We prove that for any \\delta >0, fixed in advance, the value of \\sum_x e^{f(x) can be approximated within relative error 0 < \\epsilon < 1 is quasi-polynomial n^{O(\\ln n - \\ln \\epsilon)} time , as long as \\sum_j |a_{ij| \\leq } 1-\\delta for all i. We apply the method of polynomial interpolation, for which we prove that \\sum_x e^{ f \\tilde{f} (x)} \\ne 0 for complex a _{ij and b_i such that \\sum_j |\\Re}%DIFDELCMD < \\thinspace %%% a _{ij| \\leq 1-\\delta, \\sum_j |\\Im}%DIFDELCMD < \\thinspace %%% a_{ij| \\leq \\delta^2/10 and |\\Im}%DIFDELCMD < \\thinspace %%% b_i| \\leq \\delta^2/10 for all i, which is\\tilde{f} interpreted as the absence of a phase transition in the Lee - Yang sense in the corresponding Ising model. The bounds are asymptotically optimal. The novel feature of the bounds is that they control the total interaction of each vertex but not every pairwise interaction .", "after_revision": "We consider the problem of computing the partition function \\sum_x e^{f(x)}, where f a_{ij} \\xi_i \\xi_j + \\sum_i b_i \\xi_i is a real-valued quadratic function and x=}%DIFDELCMD < \\left(%%% %DIFDELCMD < \\right) %%% : \\{-1, 1\\}^n \\longrightarrow{\\Bbb R is a quadratic or cubic polynomial on the Boolean cube \\{-1, 1\\}^n. 
In the case of a quadratic polynomial f, we show that the partition function can be approximated within relative error 0 < \\epsilon < 1 in quasi-polynomial n^{O(\\ln n - \\ln \\epsilon)} time | \\leq } if the Lipschitz constant of the non-linear part of f with respect to the \\ell^1 metric on the Boolean cube does not exceed 1-\\delta , for any \\delta >0, fixed in advance. For a cubic polynomial f, we get the same result under a somewhat stronger condition. We apply the method of polynomial interpolation, for which we prove that \\sum_x e^{ \\tilde{f} (x)} \\ne 0 for and b_i such that \\sum_j |\\Re}%DIFDELCMD < \\thinspace %%% | \\leq 1-\\delta, \\sum_j |\\Im}%DIFDELCMD < \\thinspace %%% | \\leq \\delta^2/10 and |\\Im}%DIFDELCMD < \\thinspace %%% complex-valued polynomials\\tilde{f} in a neighborhood of a real-valued f satisfying the above mentioned conditions. The bounds are asymptotically optimal. Results on the zero-free region are interpreted as the absence of a phase transition in the Lee - Yang sense in the corresponding Ising model. 
The novel feature of the bounds is that they control the total interaction of each vertex but not every single interaction of sets of vertices .", "edit_actions": [{"type": "A", "before": null, "after": "the partition function", "start_char_pos": 37, "end_char_pos": 37}, {"type": "D", "before": "(x)=\\sum_{ij", "after": null, "start_char_pos": 63, "end_char_pos": 75}, {"type": "D", "before": "\\xi_1, \\ldots, \\xi_n", "after": null, "start_char_pos": 180, "end_char_pos": 200}, {"type": "R", "before": "ranges over", "after": ": \\{-1, 1\\}^n \\longrightarrow", "start_char_pos": 225, "end_char_pos": 236}, {"type": "A", "before": null, "after": "R", "start_char_pos": 242, "end_char_pos": 242}, {"type": "A", "before": null, "after": "is a quadratic or cubic polynomial on", "start_char_pos": 243, "end_char_pos": 243}, {"type": "R", "before": "We prove that for any \\delta >0, fixed in advance, the value of \\sum_x e^{f(x)", "after": "In the case of a quadratic polynomial f, we show that the partition function", "start_char_pos": 274, "end_char_pos": 352}, {"type": "R", "before": "is", "after": "in", "start_char_pos": 412, "end_char_pos": 414}, {"type": "D", "before": ", as long as \\sum_j |a_{ij", "after": null, "start_char_pos": 465, "end_char_pos": 491}, {"type": "A", "before": null, "after": "if the Lipschitz constant of the non-linear part of f with respect to the \\ell^1 metric on the Boolean cube does not exceed", "start_char_pos": 500, "end_char_pos": 500}, {"type": "R", "before": "for all i.", "after": ", for any \\delta >0, fixed in advance. 
For a cubic polynomial f, we get the same result under a somewhat stronger condition.", "start_char_pos": 510, "end_char_pos": 520}, {"type": "D", "before": "f", "after": null, "start_char_pos": 605, "end_char_pos": 606}, {"type": "D", "before": "complex a _{ij", "after": null, "start_char_pos": 632, "end_char_pos": 646}, {"type": "D", "before": "a _{ij", "after": null, "start_char_pos": 705, "end_char_pos": 711}, {"type": "D", "before": "a_{ij", "after": null, "start_char_pos": 768, "end_char_pos": 773}, {"type": "R", "before": "b_i| \\leq \\delta^2/10 for all i, which is", "after": "complex-valued polynomials", "start_char_pos": 829, "end_char_pos": 870}, {"type": "A", "before": null, "after": "in a neighborhood of a real-valued f satisfying the above mentioned conditions. The bounds are asymptotically optimal. Results on the zero-free region are", "start_char_pos": 880, "end_char_pos": 880}, {"type": "D", "before": "bounds are asymptotically optimal. The", "after": null, "start_char_pos": 992, "end_char_pos": 1030}, {"type": "R", "before": "pairwise interaction", "after": "single interaction of sets of vertices", "start_char_pos": 1131, "end_char_pos": 1151}], "sents_char_pos": [0, 273, 520, 987, 1026]} {"doc_id": "2005.11949", "revision_depth": "1", "before_revision": "We construct deep ReLU neural networks to approximate functions in dilated shift-invariant spaces generated by a continuous function with compact support and study the approximation rates with respect to the number of neurons . The network construction is based on the bit extraction and data fitting capacity of deep neural networks. Combining with existing results of approximation from shift-invariant spaces, we are able to estimate the approximation rates of classical function spaces such as Sobolev spaces and Besov spaces . 
We also give lower bounds of the L^p ( 0, 1 ^d \\le \\le ) approximation error for Sobolev spaces, which show that our construction is asymptotically optimal up to a logarithm factor.", "after_revision": "We study the expressive power of deep ReLU neural networks for approximating functions in dilated shift-invariant spaces , which are widely used in signal processing, image processing, communications and so on. Approximation error bounds are estimated with respect to the width and depth of neural networks . The network construction is based on the bit extraction and data-fitting capacity of deep neural networks. As applications of our main results, the approximation rates of classical function spaces such as Sobolev spaces and Besov spaces are obtained . We also give lower bounds of the L^p ( 1 \\le p\\le \\infty ) approximation error for Sobolev spaces, which show that our construction of neural network is asymptotically optimal up to a logarithmic factor.", "edit_actions": [{"type": "R", "before": "construct", "after": "study the expressive power of", "start_char_pos": 3, "end_char_pos": 12}, {"type": "R", "before": "to approximate", "after": "for approximating", "start_char_pos": 39, "end_char_pos": 53}, {"type": "R", "before": "generated by a continuous function with compact support and study the approximation rates", "after": ", which are widely used in signal processing, image processing, communications and so on. 
Approximation error bounds are estimated", "start_char_pos": 98, "end_char_pos": 187}, {"type": "R", "before": "number of neurons", "after": "width and depth of neural networks", "start_char_pos": 208, "end_char_pos": 225}, {"type": "R", "before": "data fitting", "after": "data-fitting", "start_char_pos": 288, "end_char_pos": 300}, {"type": "R", "before": "Combining with existing results of approximation from shift-invariant spaces, we are able to estimate", "after": "As applications of our main results,", "start_char_pos": 335, "end_char_pos": 436}, {"type": "A", "before": null, "after": "are obtained", "start_char_pos": 530, "end_char_pos": 530}, {"type": "D", "before": "0,", "after": null, "start_char_pos": 572, "end_char_pos": 574}, {"type": "D", "before": "^d", "after": null, "start_char_pos": 577, "end_char_pos": 579}, {"type": "A", "before": null, "after": "p", "start_char_pos": 584, "end_char_pos": 584}, {"type": "A", "before": null, "after": "\\infty", "start_char_pos": 588, "end_char_pos": 588}, {"type": "A", "before": null, "after": "of neural network", "start_char_pos": 664, "end_char_pos": 664}, {"type": "R", "before": "logarithm", "after": "logarithmic", "start_char_pos": 699, "end_char_pos": 708}], "sents_char_pos": [0, 227, 334, 532]} {"doc_id": "2005.12668", "revision_depth": "1", "before_revision": "The COVID-19 pandemic has sparked unprecedented mobilization of scientists, already generating thousands of new papers that join a litany of previous biomedical work in related areas. This deluge of information makes it hard for researchers to keep track of their own research area, let alone explore new directions. Standard search engines are designed primarily for targeted search and are not geared for discovery or making connections that are not obvious from reading individual papers . In this paper, we present our ongoing work on SciSight, a novel framework for exploratory search of COVID-19 research . 
Based on formative interviews with scientists and a review of existing tools, we build and integrate two key capabilities: first, exploring interactions between biomedical facets (e.g., proteins, genes, drugs, diseases, patient characteristics); and second, discovering groups of researchers and how they are connected. We extract entities using a language model pre-trained on several biomedical information extraction tasks, and enrich them with data from the Microsoft Academic Graph (MAG). To find research groups automatically, we use hierarchical clustering with overlap to allow authors, as they do, to belong to multiple groups. Finally, we introduce a novel presentation of these groups based on both topical and social affinities, allowing users to drill down from groups to papers to associations between entities, and update query suggestions on the fly with the goal of facilitating exploratory navigation . SciSight has thus far served over 10K users with over 30K page views and 13 \\% returning users. Preliminary user interviews with biomedical researchers suggest that SciSight complements current approaches and helps find new and relevant knowledge.%DIFDELCMD < \\end{abstract} %DIFDELCMD < %%% \\\\%DIF > returns.", "after_revision": "The COVID-19 pandemic has sparked unprecedented mobilization of scientists, generating a deluge of papers that makes it hard for researchers to keep track and explore new directions. Search engines are designed for targeted queries, not for discovery of connections across a corpus . In this paper, we present SciSight, a system for exploratory search of COVID-19 research integrating two key capabilities: first, exploring associations between biomedical facets automatically extracted from papers (e.g., genes, drugs, diseases, patient outcomes); second, combining textual and network information to search and visualize groups of researchers and their ties . 
SciSight has so far served over 15K users with over 42K page views and 13 %DIFDELCMD < \\end{abstract} %DIFDELCMD < %%% \\\\%DIF > returns.", "edit_actions": [{"type": "R", "before": "already generating thousands of new papers that join a litany of previous biomedical work in related areas. This deluge of information", "after": "generating a deluge of papers that", "start_char_pos": 76, "end_char_pos": 210}, {"type": "R", "before": "of their own research area, let alone", "after": "and", "start_char_pos": 255, "end_char_pos": 292}, {"type": "R", "before": "Standard search", "after": "Search", "start_char_pos": 317, "end_char_pos": 332}, {"type": "R", "before": "primarily for targeted search and are not geared for discovery or making connections that are not obvious from reading individual papers", "after": "for targeted queries, not for discovery of connections across a corpus", "start_char_pos": 354, "end_char_pos": 490}, {"type": "D", "before": "our ongoing work on", "after": null, "start_char_pos": 519, "end_char_pos": 538}, {"type": "R", "before": "novel framework", "after": "system", "start_char_pos": 551, "end_char_pos": 566}, {"type": "R", "before": ". Based on formative interviews with scientists and a review of existing tools, we build and integrate", "after": "integrating", "start_char_pos": 611, "end_char_pos": 713}, {"type": "R", "before": "interactions", "after": "associations", "start_char_pos": 753, "end_char_pos": 765}, {"type": "A", "before": null, "after": "automatically extracted from papers", "start_char_pos": 792, "end_char_pos": 792}, {"type": "D", "before": "proteins,", "after": null, "start_char_pos": 800, "end_char_pos": 809}, {"type": "R", "before": "characteristics); and second, discovering groups of researchers and how they are connected. We extract entities using a language model pre-trained on several biomedical information extraction tasks, and enrich them with data from the Microsoft Academic Graph (MAG). 
To find research groups automatically, we use hierarchical clustering with overlap to allow authors, as they do, to belong to multiple groups. Finally, we introduce a novel presentation of these groups based on both topical and social affinities, allowing users to drill down from groups to papers to associations between entities, and update query suggestions on the fly with the goal of facilitating exploratory navigation", "after": "outcomes); second, combining textual and network information to search and visualize groups of researchers and their ties", "start_char_pos": 842, "end_char_pos": 1532}, {"type": "R", "before": "thus", "after": "so", "start_char_pos": 1548, "end_char_pos": 1552}, {"type": "R", "before": "10K", "after": "15K", "start_char_pos": 1569, "end_char_pos": 1572}, {"type": "R", "before": "30K", "after": "42K", "start_char_pos": 1589, "end_char_pos": 1592}, {"type": "D", "before": "\\% returning users. Preliminary user interviews with biomedical researchers suggest that SciSight complements current approaches and helps find new and relevant knowledge.", "after": null, "start_char_pos": 1611, "end_char_pos": 1782}], "sents_char_pos": [0, 183, 316, 492, 859, 933, 1107, 1250, 1630, 1782]} {"doc_id": "2005.12964", "revision_depth": "3", "before_revision": "Deep candidate generation (DCG) , which narrows down the enormous corpus to a few hundred candidate items via representation learning , is integral to industrial recommender systems. Standard approaches adopt maximum likelihood estimation (MLE) and rely on sampling to ensure scalability , which reduces DCG to a task similar to language modeling. 
However, live recommender systems face severe unfairness of exposure with a corpus several orders of magnitude larger than that of natural language, which implies that (1) MLE will preserve and even exacerbate the exposure bias in the long run , as it aims to faithfully fit the history records , and (2) suboptimal sampling and inadequate use of item features can lead to inferior representations for the items that are unfairly ignored . In this paper, we introduce CLRec, a Contrastive Learning paradigm successfully deployed in a real-world massive recommender system, for alleviating exposure unfairness in DCG. We theoretically prove that a popular choice of contrastive loss is equivalent to reducing the exposure bias via inverse propensity scoring, which complements previous understanding of contrastive learning. We further implement a good sampling distribution and reuse most of the computation when encoding rich features for both positive and negative items, by employing a fix-sized queue to store items (and reuse the computed representations ) from previous batches, where the queue serves as the negative sampler . Extensive offline analyses and four-month online A/B tests in Mobile Taobao demonstrate substantial improvement, including a dramatic reduction in the Matthew effect.", "after_revision": "Deep candidate generation (DCG) that narrows down the collection of relevant items from billions to hundreds via representation learning is essential to large-scale recommender systems. Standard approaches approximate maximum likelihood estimation (MLE) through sampling for better scalability and address the problem of DCG in a way similar to language modeling. 
However, live recommender systems face severe unfairness of exposure with a vocabulary several orders of magnitude larger than that of natural language, implying that (1) MLE will preserve and even exacerbate the exposure bias in the long run in order to faithfully fit the observed samples , and (2) suboptimal sampling and inadequate use of item features can lead to inferior representations for the unfairly ignored items . In this paper, we introduce CLRec, a Contrastive Learning paradigm that has been successfully deployed in a real-world massive recommender system, to alleviate exposure bias in DCG. We theoretically prove that a popular choice of contrastive loss is equivalently reducing the exposure bias via inverse propensity scoring, which provides a new perspective on the effectiveness of contrastive learning. We further employ a fix-sized queue to store the items' representations computed in previously processed batches, and use the queue to serve as an effective sampler of negative examples. This queue-based design provides great efficiency in incorporating rich features of the thousand negative items per batch thanks to computation reuse . 
Extensive offline analyses and four-month online A/B tests in Mobile Taobao demonstrate substantial improvement, including a dramatic reduction in the Matthew effect.", "edit_actions": [{"type": "R", "before": ", which", "after": "that", "start_char_pos": 32, "end_char_pos": 39}, {"type": "R", "before": "enormous corpus to a few hundred candidate items", "after": "collection of relevant items from billions to hundreds", "start_char_pos": 57, "end_char_pos": 105}, {"type": "R", "before": ", is integral to industrial", "after": "is essential to large-scale", "start_char_pos": 134, "end_char_pos": 161}, {"type": "R", "before": "adopt", "after": "approximate", "start_char_pos": 203, "end_char_pos": 208}, {"type": "R", "before": "and rely on sampling to ensure scalability , which reduces DCG to a task", "after": "through sampling for better scalability and address the problem of DCG in a way", "start_char_pos": 245, "end_char_pos": 317}, {"type": "R", "before": "corpus", "after": "vocabulary", "start_char_pos": 424, "end_char_pos": 430}, {"type": "R", "before": "which implies", "after": "implying", "start_char_pos": 497, "end_char_pos": 510}, {"type": "R", "before": ", as it aims", "after": "in order", "start_char_pos": 592, "end_char_pos": 604}, {"type": "R", "before": "history records", "after": "observed samples", "start_char_pos": 627, "end_char_pos": 642}, {"type": "R", "before": "items that are unfairly ignored", "after": "unfairly ignored items", "start_char_pos": 754, "end_char_pos": 785}, {"type": "A", "before": null, "after": "that has been", "start_char_pos": 855, "end_char_pos": 855}, {"type": "R", "before": "for alleviating exposure unfairness", "after": "to alleviate exposure bias", "start_char_pos": 922, "end_char_pos": 957}, {"type": "R", "before": "equivalent to", "after": "equivalently", "start_char_pos": 1034, "end_char_pos": 1047}, {"type": "R", "before": "complements previous understanding", "after": "provides a new perspective on the 
effectiveness", "start_char_pos": 1113, "end_char_pos": 1147}, {"type": "R", "before": "implement a good sampling distribution and reuse most of the computation when encoding rich features for both positive and negative items, by employing a", "after": "employ a", "start_char_pos": 1184, "end_char_pos": 1337}, {"type": "D", "before": "items (and reuse the computed representations ) from previous batches, where the queue serves as", "after": null, "start_char_pos": 1363, "end_char_pos": 1459}, {"type": "R", "before": "negative sampler", "after": "items' representations computed in previously processed batches, and use the queue to serve as an effective sampler of negative examples. This queue-based design provides great efficiency in incorporating rich features of the thousand negative items per batch thanks to computation reuse", "start_char_pos": 1464, "end_char_pos": 1480}], "sents_char_pos": [0, 182, 347, 787, 965, 1172]} {"doc_id": "2005.13650", "revision_depth": "1", "before_revision": "We iterate Dorfman's pool testing algorithm \\mbox{%DIFAUXCMD dorfman previous stage. We compute the mean and variance of the number of tests per individual as a function of the pool sizes m=(m_1,\\dots,m_k) in the first k stages; in the (k+1)-th stage all remaining individuals are tested. Denote by D_k(m,p) the mean number of tests per individual, which we will call the cost of the strategy m. The goal is to minimize D_k(m,p) and to find the optimizing values k and m. We show that the cost of the strategy (3^k,%DIFDELCMD < \\dots%%% ,3) with k\\approx \\log_3(1/p) is of order p\\log(1/p), and differs from the optimal cost by a fraction of this value. To prove this result we bound the difference between the cost of this strategy with the minimal cost when pool sizes take real values. 
We conjecture that the optimal strategy, depending on the value of p, is indeed of the form (3^k ,%DIFDELCMD < \\dots%%% , 3 ) or of the form (3 ^{k-1}4,3^{k-1} \\dots,3 ), with a precise description for k. This conjecture is supported by inspection of a family of values of p. Finally, we observe that for these values of p and the best strategy of the form (3^k,\\dots,3), \\big(\\big) the standard deviation of the number of tests per individual is of the same order as the cost . As an example, when p=0.02 , the optimal strategy is k=3, m=(27,9,3) . The cost of this strategy is 0.20 , that is, the mean number of tests required to screen 100 individuals is 20.", "after_revision": "In order to identify the infected individuals of a population, their samples are divided in equally sized groups called pools and a single laboratory test is applied to each pool. Individuals whose samples belong to pools that test negative are declared healthy, while each pool that tests positive is divided into smaller, equally sized pools which are tested in the next stage. This scheme is called adaptive, because the composition of the pools at each stage depends on results from previous stages, and nested because each pool is a subset of a pool of the previous stage. Is the infection probability p is not smaller than 1-3^{-1/3 compute the mean D_k(m,p) and the variance of the number of tests per individual as a function of the pool sizes m=(m_1,\\dots,m_k) in the first k stages; in the (k+1)-th stage all remaining samples are tested. The case k=1 was proposed by Dorfman in his seminal paper in 1943. The goal is to minimize D_k(m,p) , which is called the cost associated to~ m. We show that %DIFDELCMD < \\dots%%% for p\\in (0, 1-3^{-1/3 (3^k %DIFDELCMD < \\dots%%% \\text{ or 3 ^{k-1}4,3^{k-1} , \\dots,3 ^2,3 ), with a precise description of the range of p's where each holds. 
We then focus on schemes of the type (3^k,\\dots,3), and estimate that the cost of the best scheme of this type for p, determined by the choice of k=k_3(p), is of order O\\big(p\\log(1/p)\\big). This is the same order as that of the cost of the optimal scheme, and the difference of these costs is explicitly bounded . As an example, for p=0.02 the optimal choice is k=3, m=(27,9,3) , with cost 0.20 ; that is, the mean number of tests required to screen 100 individuals is 20.", "edit_actions": [{"type": "R", "before": "We iterate Dorfman's pool testing algorithm \\mbox{%DIFAUXCMD dorfman", "after": "In order to identify the infected individuals of a population, their samples are divided in equally sized groups called pools and a single laboratory test is applied to each pool. Individuals whose samples belong to pools that test negative are declared healthy, while each pool that tests positive is divided into smaller, equally sized pools which are tested in the next stage. This scheme is called adaptive, because the composition of the pools at each stage depends on results from previous stages, and nested because each pool is a subset of a pool of the", "start_char_pos": 0, "end_char_pos": 68}, {"type": "R", "before": "We", "after": "Is the infection probability p is not smaller than 1-3^{-1/3", "start_char_pos": 85, "end_char_pos": 87}, {"type": "R", "before": "and", "after": "D_k(m,p) and the", "start_char_pos": 105, "end_char_pos": 108}, {"type": "R", "before": "individuals", "after": "samples", "start_char_pos": 265, "end_char_pos": 276}, {"type": "R", "before": "Denote by D_k(m,p) the mean number of tests per individual, which we will call the cost of the strategy m.", "after": "The case k=1 was proposed by Dorfman in his seminal paper in 1943.", "start_char_pos": 289, "end_char_pos": 395}, {"type": "R", "before": "and to find the optimizing values k and", "after": ", which is called the cost associated to~", "start_char_pos": 429, "end_char_pos": 468}, {"type": 
"D", "before": "the cost of the strategy (3^k,", "after": null, "start_char_pos": 485, "end_char_pos": 515}, {"type": "R", "before": ",3) with k\\approx \\log_3(1/p) is of order p\\log(1/p), and differs from the optimal cost by a fraction of this value. To prove this result we bound the difference between the cost of this strategy with the minimal cost when pool sizes take real values. We conjecture that the optimal strategy, depending on the value of p, is indeed of the form", "after": "for p\\in (0, 1-3^{-1/3", "start_char_pos": 537, "end_char_pos": 880}, {"type": "D", "before": ",", "after": null, "start_char_pos": 886, "end_char_pos": 887}, {"type": "R", "before": ",", "after": "\\text{ or", "start_char_pos": 909, "end_char_pos": 910}, {"type": "D", "before": ") or of the form (3", "after": null, "start_char_pos": 913, "end_char_pos": 932}, {"type": "A", "before": null, "after": ",", "start_char_pos": 949, "end_char_pos": 949}, {"type": "A", "before": null, "after": "^2,3", "start_char_pos": 958, "end_char_pos": 958}, {"type": "R", "before": "for k. This conjecture is supported by inspection of a family of values of p. Finally, we observe that for these values of p and the best strategy of the form", "after": "of the range of p's where each holds. We then focus on schemes of the type", "start_char_pos": 989, "end_char_pos": 1147}, {"type": "A", "before": null, "after": "and estimate that the cost of the best scheme of this type for p, determined by the choice of k=k_3(p), is of order O", "start_char_pos": 1163, "end_char_pos": 1163}, {"type": "A", "before": null, "after": "p\\log(1/p)", "start_char_pos": 1168, "end_char_pos": 1168}, {"type": "A", "before": null, "after": ". 
This is", "start_char_pos": 1173, "end_char_pos": 1173}, {"type": "D", "before": "standard deviation of the number of tests per individual is of the", "after": null, "start_char_pos": 1178, "end_char_pos": 1244}, {"type": "R", "before": "the cost", "after": "that of the cost of the optimal scheme, and the difference of these costs is explicitly bounded", "start_char_pos": 1259, "end_char_pos": 1267}, {"type": "R", "before": "when", "after": "for", "start_char_pos": 1285, "end_char_pos": 1289}, {"type": "R", "before": ", the optimal strategy", "after": "the optimal choice", "start_char_pos": 1297, "end_char_pos": 1319}, {"type": "R", "before": ". The cost of this strategy is", "after": ", with cost", "start_char_pos": 1339, "end_char_pos": 1369}, {"type": "R", "before": ",", "after": ";", "start_char_pos": 1375, "end_char_pos": 1376}], "sents_char_pos": [0, 84, 228, 288, 395, 471, 653, 788, 995, 1269, 1340]} {"doc_id": "2006.00686", "revision_depth": "1", "before_revision": "The X-ray transform models a forward projection operator of image formation, which has been widely used for tomographic image reconstruction. We propose a new algorithm to compute that transform of an image represented by unit (pixel/voxel) basis functions , for various two-dimensional (2D)/three-dimensional (3D) scanning geometries, such as 2D/3D parallel beam, 2D fan beam, 3D circular/helical cone beam, etc. Since the transform is acting as a line integral, the fundamental task is to calculate this integral of the unit basis functions, which is equivalently the intersection length of the ray with the associated unit. For a given ray, using the support of unit basis function, we derive the sufficient and necessary condition of non-vanishing intersectability , and obtain the analytic formula for the intersection length, which can be used to distinguish the units that produce valid intersections with the given ray, and then perform simple calculations only for those units . 
The algorithm is easy to be implemented, and the computational cost is optimal. Moreover, we discuss the intrinsic ambiguities of the problem itself that perhaps happen , and present a solution. The algorithm not only possesses the adaptability with regard to the center position, scale and size of the image, and the scanning geometry as well, but also is quite suited to parallelize with optimality. The resulting projection matrix can be sparsely stored and output if needed, and the adjoint of X-ray transform can be also computed by the algorithm . Finally, we validate the correctness of the algorithm by the aforementioned scanning geometries.", "after_revision": " We propose a new algorithm to compute the X-ray transform of an image represented by unit (pixel/voxel) basis functions . The fundamental issue is equivalently calculating the intersection lengths of the ray with associated units. For any given ray, we first derive the sufficient and necessary condition for non-vanishing intersectability . By this condition, we then distinguish the units that produce valid intersections with the ray. Only for those units rather than all the individuals, we calculate the intersection lengths by the obtained analytic formula. The proposed algorithm is adapted to 2D/3D parallel beam and 2D fan beam. Particularly, we derive the transformation formulas and generalize the algorithm to 3D circular and helical cone beams . Moreover, we discuss the intrinsic ambiguities of the problem itself , and present a solution. The algorithm not only possesses the adaptability with regard to the center position, scale and size of the image, but also is suited to parallelize with optimality. The comparison study demonstrates the proposed algorithm is fast, more complete, and is more flexible with respect to different scanning geometries and different basis functions . 
Finally, we validate the correctness of the algorithm by the aforementioned scanning geometries.", "edit_actions": [{"type": "D", "before": "The X-ray transform models a forward projection operator of image formation, which has been widely used for tomographic image reconstruction.", "after": null, "start_char_pos": 0, "end_char_pos": 141}, {"type": "R", "before": "that", "after": "the X-ray", "start_char_pos": 180, "end_char_pos": 184}, {"type": "R", "before": ", for various two-dimensional (2D)/three-dimensional (3D) scanning geometries, such as 2D/3D parallel beam, 2D fan beam, 3D circular/helical cone beam, etc. Since the transform is acting as a line integral, the fundamental task is to calculate this integral of the unit basis functions, which is equivalently the intersection length", "after": ". The fundamental issue is equivalently calculating the intersection lengths", "start_char_pos": 257, "end_char_pos": 589}, {"type": "R", "before": "the associated unit. For a", "after": "associated units. For any", "start_char_pos": 606, "end_char_pos": 632}, {"type": "R", "before": "using the support of unit basis function, we", "after": "we first", "start_char_pos": 644, "end_char_pos": 688}, {"type": "R", "before": "of", "after": "for", "start_char_pos": 735, "end_char_pos": 737}, {"type": "R", "before": ", and obtain the analytic formula for the intersection length, which can be used to", "after": ". By this condition, we then", "start_char_pos": 769, "end_char_pos": 852}, {"type": "R", "before": "given ray, and then perform simple calculations only", "after": "ray. Only", "start_char_pos": 917, "end_char_pos": 969}, {"type": "A", "before": null, "after": "rather than all the individuals, we calculate the intersection lengths by the obtained analytic formula. The proposed algorithm is adapted to 2D/3D parallel beam and 2D fan beam. 
Particularly, we derive the transformation formulas and generalize the algorithm to 3D circular and helical cone beams", "start_char_pos": 986, "end_char_pos": 986}, {"type": "D", "before": "The algorithm is easy to be implemented, and the computational cost is optimal.", "after": null, "start_char_pos": 989, "end_char_pos": 1068}, {"type": "D", "before": "that perhaps happen", "after": null, "start_char_pos": 1138, "end_char_pos": 1157}, {"type": "D", "before": "and the scanning geometry as well,", "after": null, "start_char_pos": 1299, "end_char_pos": 1333}, {"type": "D", "before": "quite", "after": null, "start_char_pos": 1346, "end_char_pos": 1351}, {"type": "R", "before": "resulting projection matrix can be sparsely stored and output if needed, and the adjoint of X-ray transform can be also computed by the algorithm", "after": "comparison study demonstrates the proposed algorithm is fast, more complete, and is more flexible with respect to different scanning geometries and different basis functions", "start_char_pos": 1395, "end_char_pos": 1540}], "sents_char_pos": [0, 141, 413, 626, 988, 1068, 1183, 1390, 1542]} {"doc_id": "2006.01215", "revision_depth": "1", "before_revision": "For general complex or real 1-parameter matrix flow A(t)_{n,n} \\CC this paper considers ways to decompose flows globally via one constant matrix C_{n,n} as A(t) = C ^{-1} \\cdot diag(A_1(t), ..., A_\\ell(t)) \\cdot C \\cdot diag(A_1,...,A_\\ell)\\cdot C } with each diagonal blockA _k(t) square and the number of blocks \\ell > 1 if possible. The theory behind our algorithm is elementary and uses the concept of invariant subspaces for the Matlab {\\tt eig} computed 'eigenvectors' of one flow matrix A (t_a) to find the coarsest simultaneous block structure for all flow matrices A (t_b). 
The method works very efficiently for all matrix flows, be they differentiable, continuous or discontinuous in t, and for all types of square matrix flows such as hermitean, real symmetric, normal or general complex and real flows A(t) , with or without Jordan block structures and with or without repeated eigenvalues. Our intended aim is to discover decomposable flows as they originate in sensor given outputs for time-varying matrix problems and thereby reduce the complexities of their numerical treatment .", "after_revision": "For general complex or real 1-parameter matrix flows A(t)_{n,n} and for time-invariant static matrices A \\in\\CC_{n,n this paper considers ways to decompose matrix flows and single matrices globally via one constant matrix similarity C_{n,n} as A(t) = C ^{-1} \\cdot diag(A_1(t), ..., A_\\ell(t)) \\cdot C or A = C^{-1\\cdot diag(A_1,...,A_\\ell)\\cdot C } with each diagonal block A _k(t) or A_k square and their number \\ell > 1 if this is possible. The theory behind our proposed algorithm is elementary and uses the concept of invariant subspaces for the Matlab {\\tt eig} computed 'eigenvectors' of one associated flow matrix B (t_a) to find the coarsest simultaneous block structure for all flow matrices B (t_b). The method works very efficiently for all time-varying matrix flows, be they differentiable, continuous or discontinuous in t, and for all fixed entry matrices A; as well as for all types of square matrix flows or fixed entry matrices such as hermitean, real symmetric, normal or general complex and real flows A(t) or static matrices A , with or without Jordan block structures and with or without repeated eigenvalues. Our intended aim is to discover diagonal-block decomposable flows as they originate in sensor driven outputs for time-varying matrix problems and thereby help to reduce the complexities of their numerical treatments through adapting 'divide and conquer' methods for their diagonal sub-blocks. 
Our method is also applicable to standard fixed entry matrices of all structures and types. In the process we discover and study k-normal fixed entry matrix classes that can be decomposed under unitary similarities into various k-dimensional block-diagonal forms .", "edit_actions": [{"type": "R", "before": "flow", "after": "flows", "start_char_pos": 47, "end_char_pos": 51}, {"type": "A", "before": null, "after": "and for time-invariant static matrices A \\in", "start_char_pos": 63, "end_char_pos": 63}, {"type": "A", "before": null, "after": "_{n,n", "start_char_pos": 66, "end_char_pos": 66}, {"type": "R", "before": "flows", "after": "matrix flows and single matrices", "start_char_pos": 106, "end_char_pos": 111}, {"type": "A", "before": null, "after": "similarity", "start_char_pos": 145, "end_char_pos": 145}, {"type": "A", "before": null, "after": "or A = C^{-1", "start_char_pos": 215, "end_char_pos": 215}, {"type": "R", "before": "blockA", "after": "block A", "start_char_pos": 270, "end_char_pos": 276}, {"type": "R", "before": "square and the number of blocks", "after": "or A_k square and their number", "start_char_pos": 283, "end_char_pos": 314}, {"type": "A", "before": null, "after": "this is", "start_char_pos": 327, "end_char_pos": 327}, {"type": "A", "before": null, "after": "proposed", "start_char_pos": 360, "end_char_pos": 360}, {"type": "R", "before": "flow matrix A", "after": "associated flow matrix B", "start_char_pos": 485, "end_char_pos": 498}, {"type": "R", "before": "A", "after": "B", "start_char_pos": 577, "end_char_pos": 578}, {"type": "A", "before": null, "after": "time-varying", "start_char_pos": 628, "end_char_pos": 628}, {"type": "A", "before": null, "after": "fixed entry matrices A; as well as for all", "start_char_pos": 713, "end_char_pos": 713}, {"type": "A", "before": null, "after": "or fixed entry matrices", "start_char_pos": 743, "end_char_pos": 743}, {"type": "A", "before": null, "after": "or static matrices A", "start_char_pos": 825, 
"end_char_pos": 825}, {"type": "A", "before": null, "after": "diagonal-block", "start_char_pos": 942, "end_char_pos": 942}, {"type": "R", "before": "given", "after": "driven", "start_char_pos": 990, "end_char_pos": 995}, {"type": "A", "before": null, "after": "help to", "start_char_pos": 1049, "end_char_pos": 1049}, {"type": "R", "before": "treatment", "after": "treatments through adapting 'divide and conquer' methods for their diagonal sub-blocks. Our method is also applicable to standard fixed entry matrices of all structures and types. In the process we discover and study k-normal fixed entry matrix classes that can be decomposed under unitary similarities into various k-dimensional block-diagonal forms", "start_char_pos": 1093, "end_char_pos": 1102}], "sents_char_pos": [0, 337, 585, 909]} {"doc_id": "2006.01911", "revision_depth": "1", "before_revision": "We price European-style options written on forward contracts in a commodity market, which we model with a state-dependent infinite-dimensional Heath-Jarrow-Morton (HJM) approach. We introduce a new class of volatility operators which map the square integrable noise into the Filipovi \\'{c space of forward curves , and we specify a deterministic parametrized version of it. For calibration purposes, we train a neural network to approximate the option price as a function of the model parameters. We then use it to calibrate the HJM parameters starting from (simulated) option market data. Finally we introduce a new loss function that takes into account bid and ask prices and offers a solution to calibration in illiquid markets . A key issue discovered is that the trained neural network might be non-injective, which could potentially lead to poor accuracy in calibrating the forward curve parameters, even when showing a high degree of accuracy in recovering the prices . 
This reveals that the original meaning of the parameters gets somehow lost in the approximation .", "after_revision": "We price European-style options written on forward contracts in a commodity market, which we model with an infinite-dimensional Heath-Jarrow-Morton (HJM) approach. For this purpose we introduce a new class of state-dependent volatility operators that map the square integrable noise into the Filipovi \\'{c space of forward curves . For calibration, we specify a fully parametrized version of our model and train a neural network to approximate the true option price as a function of the model parameters. This neural network can then be used to calibrate the HJM parameters based on observed option prices. We conduct a numerical case study based on artificially generated option prices in a deterministic volatility setting. In this setting we derive closed pricing formulas, allowing us to benchmark the neural network based calibration approach. We also study calibration in illiquid markets with a large bid-ask spread. The experiments reveal a high degree of accuracy in recovering the prices after calibration, even if the original meaning of the model parameters is partly lost in the approximation step .", "edit_actions": [{"type": "R", "before": "a state-dependent", "after": "an", "start_char_pos": 104, "end_char_pos": 121}, {"type": "R", "before": "We", "after": "For this purpose we", "start_char_pos": 179, "end_char_pos": 181}, {"type": "R", "before": "volatility operators which", "after": "state-dependent volatility operators that", "start_char_pos": 207, "end_char_pos": 233}, {"type": "R", "before": "\\'{c", "after": "\\'{c", "start_char_pos": 284, "end_char_pos": 288}, {"type": "R", "before": ", and", "after": ". For calibration,", "start_char_pos": 313, "end_char_pos": 318}, {"type": "R", "before": "deterministic", "after": "fully", "start_char_pos": 332, "end_char_pos": 345}, {"type": "R", "before": "it. 
For calibration purposes, we", "after": "our model and", "start_char_pos": 370, "end_char_pos": 402}, {"type": "A", "before": null, "after": "true", "start_char_pos": 445, "end_char_pos": 445}, {"type": "R", "before": "We then use it", "after": "This neural network can then be used", "start_char_pos": 498, "end_char_pos": 512}, {"type": "R", "before": "starting from (simulated) option market data. Finally we introduce a new loss function that takes into account bid and ask prices and offers a solution to calibration", "after": "based on observed option prices. We conduct a numerical case study based on artificially generated option prices in a deterministic volatility setting. In this setting we derive closed pricing formulas, allowing us to benchmark the neural network based calibration approach. We also study calibration", "start_char_pos": 545, "end_char_pos": 711}, {"type": "R", "before": ". A key issue discovered is that the trained neural network might be non-injective, which could potentially lead to poor accuracy in calibrating the forward curve parameters, even when showing", "after": "with a large bid-ask spread. The experiments reveal", "start_char_pos": 732, "end_char_pos": 924}, {"type": "R", "before": ". This reveals that", "after": "after calibration, even if", "start_char_pos": 976, "end_char_pos": 995}, {"type": "R", "before": "parameters gets somehow", "after": "model parameters is partly", "start_char_pos": 1024, "end_char_pos": 1047}, {"type": "A", "before": null, "after": "step", "start_char_pos": 1074, "end_char_pos": 1074}], "sents_char_pos": [0, 178, 373, 497, 590, 977]} {"doc_id": "2006.02141", "revision_depth": "1", "before_revision": "Photoconductive devices (PCDs) enhanced with nanostructures have shown a significantly improved optical-to-terahertz conversion efficiency. 
While the experimental research on the development of such devices has progressed remarkably, simulation of these devices is still challenging due to the high computational cost resulting from modeling and discretization of complicated physical processes and intricate geometries. In this work, a discontinuous Galerkin (DG) method-based unit-cell scheme for efficient simulation of PCDs with periodic nanostructures is proposed. The scheme considers two physical stages of the device and model them using two coupled systems, i.e., a Poisson-drift-diffusion (DD) system describing the nonequilibrium steady state, and a Maxwell-DD system describing the transient stage. A \"potential-drop\" boundary condition is enforced on the opposing boundaries of the unit cell to mimic the effect of the bias voltage. Periodic boundary conditions are used for carrier densities and electromagnetic fields. The unit-cell model composed of these coupled equations and boundary conditions is discretized and solved using DG methods. The boundary conditions are enforced weakly through the numerical flux of DG. Numerical results show that the proposed DG-based unit-cell scheme models the device accurately but is significantly faster than the DG scheme that takes into account the whole device .", "after_revision": "Photoconductive devices (PCDs) enhanced with nanostructures have a significantly improved optical-to-terahertz conversion efficiency. While the experimental research on the development of these devices has progressed remarkably, their simulation is still challenging due to the need for accurate and efficient modeling of multiphysics processes and intricate device geometries. In this work, a discontinuous Galerkin (DG) method-based unit-cell scheme for efficient simulation of PCDs with periodic nanostructures is proposed. 
The scheme considers two physical stages of the device and models them using two coupled systems, i.e., a system of Poisson and drift-diffusion equations describing the nonequilibrium steady state, and a system of Maxwell and drift-diffusion equations describing the transient stage. A \"potential-drop\" boundary condition is enforced on the opposing boundaries of the unit cell to mimic the effect of the bias voltage. Periodic boundary conditions are used for carrier densities and electromagnetic fields. The unit-cell model described by these coupled equations and boundary conditions is discretized using DG methods. The resulting DG-based unit-cell scheme is significantly faster than the DG scheme that takes into account the whole device . Additionally, the proposed scheme is used for the first-ever numerical demonstration of optical- and radiation-field screening effects on PCD response. The optical-field screening is found to play a more dominant role in the saturation of PCD output at high levels of optical pump power .", "edit_actions": [{"type": "D", "before": "shown", "after": null, "start_char_pos": 65, "end_char_pos": 70}, {"type": "R", "before": "such", "after": "these", "start_char_pos": 194, "end_char_pos": 198}, {"type": "R", "before": "simulation of these devices", "after": "their simulation", "start_char_pos": 234, "end_char_pos": 261}, {"type": "R", "before": "high computational cost resulting from modeling and discretization of complicated physical", "after": "need for accurate and efficient modeling of multiphysics", "start_char_pos": 294, "end_char_pos": 384}, {"type": "A", "before": null, "after": "device", "start_char_pos": 409, "end_char_pos": 409}, {"type": "R", "before": "model", "after": "models", "start_char_pos": 630, "end_char_pos": 635}, {"type": "R", "before": "Poisson-drift-diffusion (DD) system", "after": "system of Poisson and drift-diffusion equations", "start_char_pos": 676, "end_char_pos": 711}, {"type": "R", "before": "Maxwell-DD 
system", "after": "system of Maxwell and drift-diffusion equations", "start_char_pos": 762, "end_char_pos": 779}, {"type": "R", "before": "composed of", "after": "described by", "start_char_pos": 1055, "end_char_pos": 1066}, {"type": "D", "before": "and solved", "after": null, "start_char_pos": 1130, "end_char_pos": 1140}, {"type": "R", "before": "boundary conditions are enforced weakly through the numerical flux of DG. Numerical results show that the proposed", "after": "resulting", "start_char_pos": 1163, "end_char_pos": 1277}, {"type": "D", "before": "models the device accurately but", "after": null, "start_char_pos": 1304, "end_char_pos": 1336}, {"type": "A", "before": null, "after": ". Additionally, the proposed scheme is used for the first-ever numerical demonstration of optical- and radiation-field screening effects on PCD response. The optical-field screening is found to play a more dominant role in the saturation of PCD output at high levels of optical pump power", "start_char_pos": 1421, "end_char_pos": 1421}], "sents_char_pos": [0, 139, 421, 570, 811, 946, 1034, 1158]} {"doc_id": "2006.03141", "revision_depth": "1", "before_revision": "We describe in this report our studies to understand the relationship between human mobility and the spreading of COVID-19 , as an aid to manage the restart of the social and economic activities after the lockdown and monitor the epidemics in the coming weeks and months. We compare the evolution (from January to May 2020 ) of the daily mobilityflows in Italy, measured by means of nation-wide mobile phone data, and the evolution of transmissibility, measured by the net reproduction number, i.e., the mean number of secondary infections generated by one primary infector in the presence of control interventions and human behavioural adaptations. 
We find a striking relationship between the negative variation of mobility flows and the net reproduction number , in all Italian regions, between March 11th and March 18th, when the country entered the lockdown. This observation allows us to quantify the time needed to \"switch off \" the country mobility (one week) and the time required to bring the net reproduction number below 1 (one week). A reasonably simple regression model provides evidence that the net reproduction number is correlated with a region's incoming, outgoing and internal mobility. We also find a strong relationship between the number of days above the epidemic threshold before the mobility flows reduce significantly as an effect of lockdowns, and the total number of confirmed SARS-CoV-2 infections per 100k inhabitants , thus indirectly showing the effectiveness of the lockdown and the other non-pharmaceutical interventions in the containment of the contagion . Our study demonstrates the value of \"big \" mobility data to the monitoring of key epidemic indicators to inform choices as the epidemics unfolds in the coming months .", "after_revision": "In 2020, countries affected by the COVID-19 pandemic implemented various non-pharmaceutical interventions to contrast the spread of the virus and its impact on their healthcare systems and economies. Using Italian data at different geographic scales, we investigate the relationship between human mobility, which subsumes many facets of the population's response to the changing situation, and the spread of COVID-19. Leveraging mobile phone data from February through September 2020 , we find a striking relationship between the decrease in mobility flows and the net reproduction number . We find that the time needed to switch off mobility and bring the net reproduction number below the critical threshold of 1 is about one week. 
Moreover, we observe a strong relationship between the number of days spent above such threshold before the lockdown-induced drop in mobility flows and the total number of infections per 100k inhabitants . Estimating the statistical effect of mobility flows on the net reproduction number over time, we document a 2-week lag positive association, strong in March and April, and weaker but still significant in June . Our study demonstrates the value of big mobility data to monitor the epidemic and inform control interventions during its unfolding .", "edit_actions": [{"type": "R", "before": "We describe in this report our studies to understand the relationship between human mobility and the spreading of", "after": "In 2020, countries affected by the", "start_char_pos": 0, "end_char_pos": 113}, {"type": "R", "before": ", as an aid to manage the restart of the social and economic activities after the lockdown and monitor the epidemics in the coming weeks and months. We compare the evolution (from January to May", "after": "pandemic implemented various non-pharmaceutical interventions to contrast the spread of the virus and its impact on their healthcare systems and economies. Using Italian data at different geographic scales, we investigate the relationship between human mobility, which subsumes many facets of the population's response to the changing situation, and the spread of COVID-19. Leveraging mobile phone data from February through September", "start_char_pos": 123, "end_char_pos": 317}, {"type": "R", "before": ") of the daily mobilityflows in Italy, measured by means of nation-wide mobile phone data, and the evolution of transmissibility, measured by the net reproduction number, i.e., the mean number of secondary infections generated by one primary infector in the presence of control interventions and human behavioural adaptations. 
We", "after": ", we", "start_char_pos": 323, "end_char_pos": 652}, {"type": "R", "before": "negative variation of", "after": "decrease in", "start_char_pos": 694, "end_char_pos": 715}, {"type": "R", "before": ", in all Italian regions, between March 11th and March 18th, when the country entered the lockdown. This observation allows us to quantify", "after": ". We find that", "start_char_pos": 763, "end_char_pos": 901}, {"type": "R", "before": "\"switch off \" the country mobility (one week) and the time required to", "after": "switch off mobility and", "start_char_pos": 921, "end_char_pos": 991}, {"type": "A", "before": null, "after": "the critical threshold of", "start_char_pos": 1032, "end_char_pos": 1032}, {"type": "R", "before": "(one week). A reasonably simple regression model provides evidence that the net reproduction number is correlated with a region's incoming, outgoing and internal mobility. We also find", "after": "is about one week. Moreover, we observe", "start_char_pos": 1035, "end_char_pos": 1219}, {"type": "R", "before": "above the epidemic", "after": "spent above such", "start_char_pos": 1269, "end_char_pos": 1287}, {"type": "R", "before": "mobility flows reduce significantly as an effect of lockdowns,", "after": "lockdown-induced drop in mobility flows", "start_char_pos": 1309, "end_char_pos": 1371}, {"type": "D", "before": "confirmed SARS-CoV-2", "after": null, "start_char_pos": 1396, "end_char_pos": 1416}, {"type": "R", "before": ", thus indirectly showing the effectiveness of the lockdown and the other non-pharmaceutical interventions in the containment of the contagion", "after": ". 
Estimating the statistical effect of mobility flows on the net reproduction number over time, we document a 2-week lag positive association, strong in March and April, and weaker but still significant in June", "start_char_pos": 1449, "end_char_pos": 1591}, {"type": "R", "before": "\"big \"", "after": "big", "start_char_pos": 1630, "end_char_pos": 1636}, {"type": "R", "before": "the monitoring of key epidemic indicators to inform choices as the epidemics unfolds in the coming months", "after": "monitor the epidemic and inform control interventions during its unfolding", "start_char_pos": 1654, "end_char_pos": 1759}], "sents_char_pos": [0, 271, 649, 862, 1046, 1206, 1593]} {"doc_id": "2006.03840", "revision_depth": "1", "before_revision": " 3D Morphable Models (3DMMs) are powerful statistical tools for representing and modeling 3D faces . To build a 3DMM, a training set of fully registered face scans is required, and its modeling capabilities directly depend on the variability contained in the training data. Thus, accurately establishing a dense point-to-point correspondence across heterogeneous scans with sufficient diversity in terms of identities, ethnicities, or expressions becomes essential. In this manuscript, we present an approach that leverages a 3DMM to transfer its dense semantic annotation across a large set of heterogeneous 3D faces , establishing a dense correspondence between them. To this aim, we propose a novel formulation to learn a set of sparse deformation components with local support on the face that, together with an original non-rigid deformation algorithm, allow precisely fitting the 3DMM to arbitrary faces and transfer its semantic annotation . We experimented our approach on three large and diverse datasets, showing it can effectively generalize to very different samples and accurately establish a dense correspondence even in presence of complex facial expressions or unseen deformations. 
As main outcome of this work, we build a heterogeneous, large-scale 3DMM from more than 9,000 fully registered scans obtained joining the three datasets together .", "after_revision": "The 3D Morphable Model (3DMM) is a powerful statistical tool for representing 3D face shapes . To build a 3DMM, a training set of scans in full point-to-point correspondence is required, and its modeling capabilities directly depend on the variability of the training data. Hence, to increase the descriptive power of a 3DMM, accurately establishing dense correspondence across heterogeneous scans with sufficient diversity in terms of identities, ethnicities, or expressions becomes essential. In this manuscript, we present a fully automatic approach that leverages a 3DMM to establish a dense correspondence across raw 3D faces . We propose a novel formulation to learn a set of sparse deformation components with local support on the face that, together with an original non-rigid deformation algorithm, allow the 3DMM to precisely fit unseen faces and transfer its semantic annotation to arbitrary 3D faces . We experimented our approach on three large and diverse datasets, showing it can effectively generalize to very different samples and accurately establish a dense correspondence even in presence of complex facial expressions . 
The accuracy of the dense registration is demonstrated by building a heterogeneous, large-scale 3DMM from more than 9,000 fully registered scans obtained by joining the three datasets .", "edit_actions": [{"type": "A", "before": null, "after": "The", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "Models (3DMMs) are powerful statistical tools for representing and modeling", "after": "Model (3DMM) is a powerful statistical tool for representing", "start_char_pos": 14, "end_char_pos": 89}, {"type": "R", "before": "faces", "after": "face shapes", "start_char_pos": 93, "end_char_pos": 98}, {"type": "R", "before": "fully registered face scans", "after": "scans in full point-to-point correspondence", "start_char_pos": 136, "end_char_pos": 163}, {"type": "R", "before": "contained in", "after": "of", "start_char_pos": 242, "end_char_pos": 254}, {"type": "R", "before": "Thus, accurately establishing a dense point-to-point", "after": "Hence, to increase the descriptive power of a 3DMM, accurately establishing dense", "start_char_pos": 274, "end_char_pos": 326}, {"type": "R", "before": "an", "after": "a fully automatic", "start_char_pos": 497, "end_char_pos": 499}, {"type": "R", "before": "transfer its dense semantic annotation across a large set of heterogeneous", "after": "establish a dense correspondence across raw", "start_char_pos": 534, "end_char_pos": 608}, {"type": "R", "before": ", establishing a dense correspondence between them. To this aim, we", "after": ". We", "start_char_pos": 618, "end_char_pos": 685}, {"type": "D", "before": "precisely fitting", "after": null, "start_char_pos": 864, "end_char_pos": 881}, {"type": "R", "before": "arbitrary", "after": "precisely fit unseen", "start_char_pos": 894, "end_char_pos": 903}, {"type": "A", "before": null, "after": "to arbitrary 3D faces", "start_char_pos": 947, "end_char_pos": 947}, {"type": "R", "before": "or unseen deformations. As main outcome of this work, we build", "after": ". 
The accuracy of the dense registration is demonstrated by building", "start_char_pos": 1175, "end_char_pos": 1237}, {"type": "A", "before": null, "after": "by", "start_char_pos": 1325, "end_char_pos": 1325}, {"type": "D", "before": "together", "after": null, "start_char_pos": 1353, "end_char_pos": 1361}], "sents_char_pos": [0, 100, 273, 465, 669, 949, 1198]} {"doc_id": "2006.04816", "revision_depth": "1", "before_revision": "The COVID-19 pandemic has deeply impacted people's lives around the globe. During the extended lockdowns caused by the pandemic, online communities are crucial for people to access information and share experiences . In particular, two \"new\" communities have emerged on Reddit: /r/China _flu and /r/Coronavirus . By studying activities and users in these two communities, we provide a characterization of people's responses to COVID-19 on Reddit . First, we find that user activity peaks around March 17, when the World Health Organization (WHO) announced COVID-19 as a pandemic. Shortly after that, the activity levels of both communities have been declining week by week. We further illustrate the central role of these two communities in the emergence of COVID-related communities. Second, we study the differences between these two communities. /r/ Coronavirus is recommended as the official community for COVID-19 on Reddit, while /r / China _flu adopts a loose moderation practice. As a result, we observe that these two communities are gradually growing apart and more extremism is being found in /r/China _flu. Finally, we examine the spillover effect of the COVID-19 pandemic on user activity across the entire Reddit platform. Our results show significant changes in user activities outside COVID-related communities . 
In subreddits related to finance, food, and countries / cities, user activity is recovering to the pre-pandemic level in late April and May as countries reopen, but subreddits related to travel and sports remain highly impacted and show lower activity levels than the pre-pandemic period. Our work highlights the strength of Reddit as a source for understanding public reactions to COVID-19 and the importance of content moderation on the Internet during a pandemic .", "after_revision": "As the COVID-19 pandemic is disrupting life worldwide, related online communities are popping up . In particular, two \"new\" communities , /r/China flu and /r/Coronavirus , emerged on Reddit and have been dedicated to COVID- related discussions from the very beginning of this pandemic. With /r/Coronavirus promoted as the official community on Reddit, it remains an open question how users choose between these two highly-related communities. In this paper, we characterize user trajectories in these two communities from the beginning of COVID-19 to the end of September 2020. We show that new users of /r/ China flu and /r /Coronavirus were similar from January to March. After that, their differences steadily increase, evidenced by both language distance and membership prediction, as the pandemic continues to unfold. Furthermore, users who started at / r/ China flu from January to March were more likely to leave, while those who started in later months tend to remain highly \"loyal\". To understand this difference, we develop a movement analysis framework to understand membership changes in these two communities and identify a significant proportion of /r/China flu members (around 50\\%) that moved to / r/Coronavirus in February. This movement turns out to be highly predictable based on other subreddits that users were previously active in. 
Our work demonstrates how two highly-related communities emerge and develop their own identity in a crisis, and highlights the important role of existing communities in understanding such an emergence .", "edit_actions": [{"type": "R", "before": "The", "after": "As the", "start_char_pos": 0, "end_char_pos": 3}, {"type": "R", "before": "has deeply impacted people's lives around the globe. During the extended lockdowns caused by the pandemic,", "after": "is disrupting life worldwide, related", "start_char_pos": 22, "end_char_pos": 128}, {"type": "R", "before": "crucial for people to access information and share experiences", "after": "popping up", "start_char_pos": 152, "end_char_pos": 214}, {"type": "R", "before": "have emerged on Reddit:", "after": ",", "start_char_pos": 254, "end_char_pos": 277}, {"type": "R", "before": "_flu", "after": "flu", "start_char_pos": 287, "end_char_pos": 291}, {"type": "R", "before": ". By studying activities and users in these two communities, we provide a characterization of people's responses to COVID-19 on Reddit . First, we find that user activity peaks around March 17, when the World Health Organization (WHO) announced COVID-19 as a pandemic. Shortly after that, the activity levels of both communities have been declining week by week. We further illustrate the central role of", "after": ", emerged on Reddit and have been dedicated to COVID- related discussions from the very beginning of this pandemic. With /r/Coronavirus promoted as the official community on Reddit, it remains an open question how users choose between these two highly-related communities. In this paper, we characterize user trajectories in", "start_char_pos": 311, "end_char_pos": 715}, {"type": "R", "before": "in the emergence of COVID-related communities. Second, we study the differences between these two communities.", "after": "from the beginning of COVID-19 to the end of September 2020. 
We show that new users of", "start_char_pos": 738, "end_char_pos": 848}, {"type": "R", "before": "Coronavirus is recommended as the official community for COVID-19 on Reddit, while", "after": "China flu and", "start_char_pos": 853, "end_char_pos": 935}, {"type": "A", "before": null, "after": "/Coronavirus were similar from January to March. After that, their differences steadily increase, evidenced by both language distance and membership prediction, as the pandemic continues to unfold. Furthermore, users who started at", "start_char_pos": 939, "end_char_pos": 939}, {"type": "A", "before": null, "after": "r/", "start_char_pos": 942, "end_char_pos": 942}, {"type": "R", "before": "_flu adopts a loose moderation practice. As a result, we observe that", "after": "flu from January to March were more likely to leave, while those who started in later months tend to remain highly \"loyal\". To understand this difference, we develop a movement analysis framework to understand membership changes in", "start_char_pos": 949, "end_char_pos": 1018}, {"type": "R", "before": "are gradually growing apart and more extremism is being found in", "after": "and identify a significant proportion of", "start_char_pos": 1041, "end_char_pos": 1105}, {"type": "R", "before": "_flu. Finally, we examine the spillover effect of the COVID-19 pandemic on user activity across the entire Reddit platform. Our results show significant changes in user activities outside COVID-related communities . In subreddits related to finance, food, and countries", "after": "flu members (around 50\\%) that moved to", "start_char_pos": 1115, "end_char_pos": 1384}, {"type": "R", "before": "cities, user activity is recovering to the pre-pandemic level in late April and May as countries reopen, but subreddits related to travel and sports remain highly impacted and show lower activity levels than the pre-pandemic period. 
Our work highlights the strength of Reddit as a source for understanding public reactions to COVID-19 and the importance of content moderation on the Internet during a pandemic", "after": "r/Coronavirus in February. This movement turns out to be highly predictable based on other subreddits that users were previously active in. Our work demonstrates how two highly-related communities emerge and develop their own identity in a crisis, and highlights the important role of existing communities in understanding such an emergence", "start_char_pos": 1387, "end_char_pos": 1796}], "sents_char_pos": [0, 74, 216, 312, 447, 579, 673, 784, 989, 1120, 1238, 1330, 1619]} {"doc_id": "2006.05104", "revision_depth": "2", "before_revision": "Although a significant number of compressed indexes for highly repetitive strings have been proposed thus far, developing compressed indexes that support faster queries remains a challenge. Run-length Burrows-Wheeler transform (RLBWT) is a lossless data compression by a reversible permutation of an input string and run-length encoding, and it has become a popular research topic in string processing. R-index Gagie et al., ACM'20%DIFDELCMD < ] %%% is an efficient compressed index on RLBWTwhose space usage depends not on string length but the number of runs in an RLBWT, and it supports locate queries in an optimal time with \\omega are two key functions for building indexes on RLBWT, and the best previous result computes LF and \\phi^{-1} in O(\\log \\log n) time with O} (r) words for the number r of runs in the RLBWTof an input string. Following this line of research, we present the first compressed index on RLBWT, which we callr-index-f , that supports various queries including locate, count, extract queries, decompression and prefix search in the optimal time with smaller working space of O(r) words for small alphabets in this paper. 
We present efficient data structures for computing two important functions of LF and \\phi^{-1} in constant time with O(r) words of space , which is a bit step forward in computation time from the previous best result of O(\\log \\log n)time for string length n and O(r) words of space. Finally, We present algorithms for computing queries on RLBWT by leveraging those two data structures in optimal time with O(r) words of space.", "after_revision": "Indexing highly repetitive strings (i.e., strings with many repetitions) for fast queries has become a central research topic in string processing, because it has a wide variety of applications in bioinformatics and natural language processing. Although a substantial number of indexes for highly repetitive strings have been proposed thus far, developing compressed indexes that support various queries remains a challenge. The run-length Burrows-Wheeler transform (RLBWT) is a lossless data compression by a reversible permutation of an input string and run-length encoding, and it has %DIFDELCMD < ] %%% received interest for indexing highly repetitive strings. LF and \\phi^{-1 are two key functions for building indexes on RLBWT, and the best previous result computes LF and \\phi^{-1} in O(\\log \\log n) time with O} (r) words of space for the string length n and the number r of runs in RLBWT. In this paper, we improve LF and \\phi^{-1} so that they can be computed in a constant time with O(r) words of space . 
Subsequently, we present OptBWTR (optimal-time queries on BWT-runs compressed indexes), the first string index that supports various queries including locate, count, extract queries in optimal time and O(r) words of space.", "edit_actions": [{"type": "R", "before": "Although a significant number of compressed", "after": "Indexing highly repetitive strings (i.e., strings with many repetitions) for fast queries has become a central research topic in string processing, because it has a wide variety of applications in bioinformatics and natural language processing. Although a substantial number of", "start_char_pos": 0, "end_char_pos": 43}, {"type": "R", "before": "faster", "after": "various", "start_char_pos": 154, "end_char_pos": 160}, {"type": "R", "before": "Run-length", "after": "The run-length", "start_char_pos": 190, "end_char_pos": 200}, {"type": "D", "before": "become a popular research topic in string processing. R-index", "after": null, "start_char_pos": 349, "end_char_pos": 410}, {"type": "D", "before": "Gagie et al., ACM'20", "after": null, "start_char_pos": 411, "end_char_pos": 431}, {"type": "R", "before": "is an efficient compressed index on RLBWTwhose space usage depends not on string length but the number of runs in an RLBWT, and it supports locate queries in an optimal time with \\omega", "after": "received interest for indexing highly repetitive strings. LF and \\phi^{-1", "start_char_pos": 450, "end_char_pos": 635}, {"type": "R", "before": "for the", "after": "of space for the string length n and the", "start_char_pos": 785, "end_char_pos": 792}, {"type": "D", "before": "the RLBWTof an input string. 
Following this line of research, we present the first compressed index on RLBWT, which we call", "after": null, "start_char_pos": 813, "end_char_pos": 936}, {"type": "D", "before": "r-index-f", "after": null, "start_char_pos": 936, "end_char_pos": 945}, {"type": "R", "before": ", that supports various queries including locate, count, extract queries, decompression and prefix search in the optimal time with smaller working space of O(r) words for small alphabets in this paper. We present efficient data structures for computing two important functions of", "after": "RLBWT. In this paper, we improve", "start_char_pos": 946, "end_char_pos": 1225}, {"type": "R", "before": "in", "after": "so that they can be computed in a", "start_char_pos": 1243, "end_char_pos": 1245}, {"type": "R", "before": ", which is a bit step forward in computation time from the previous best result of O(\\log \\log n)time for string length n and O(r) words of space. Finally, We present algorithms for computing queries on RLBWT by leveraging those two data structures", "after": ". Subsequently, we present OptBWTR (optimal-time queries on BWT-runs compressed indexes), the first string index that supports various queries including locate, count, extract queries", "start_char_pos": 1285, "end_char_pos": 1533}, {"type": "R", "before": "with", "after": "and", "start_char_pos": 1550, "end_char_pos": 1554}], "sents_char_pos": [0, 189, 402, 841, 1147, 1431]} {"doc_id": "2006.05509", "revision_depth": "1", "before_revision": "Powered by artificial intelligence (AI) , particularly deep neural networks, computer aided detection (CAD) tools can be trained to recognize TB-related abnormalities on chest radiographs , thereby screening large numbers of people and reducing the pressure on healthcare professionals. 
Addressing the lack of studies comparing the performance of different products, we evaluated five AI software platforms specific to TB : CAD4TB ( v6 ), InferReadDR (v2), Lunit INSIGHT for Chest Radiography (v4.9.0), JF CXR-1 (v2) by and qXR (v3) by on an unseen dataset of chest X-rays collected in three TB screening center in Dhaka, Bangladesh. The 23,566 individuals included in the study all received a CXR read by a group of three Bangladeshi board-certified radiologists. A sample of CXRs were re-read by US board-certified radiologists. Xpert was used as the reference standard. All five AI platforms significantly outperformed the human readers . The areas under the receiver operating characteristic curves are qXR: 0.91 (95\\% CI: 0.90-0.91), Lunit INSIGHT CXR: 0.89 (95\\% CI: 0.88-0.89), InferReadDR: 0.85 (95\\% CI: 0.84-0.86), JF CXR-1: 0.85 (95\\% CI: 0.84-0.85) , CAD4TB: 0.82 (95\\% CI: 0.81-0.83). We also proposed a new analytical framework that evaluates a screening and triage test and informs threshold selection through tradeoff between cost efficiency and ability to triage. Further, we assessed the performance of the five AI algorithms across the subgroups of age , use cases, and prior TB history , and found that the threshold scores performed differently across different subgroups. The positive results of our evaluation indicate that these AI products can be useful screening and triage tools for active case finding in high TB-burden regions .", "after_revision": "Artificial intelligence (AI) products can be trained to recognize tuberculosis (TB)-related abnormalities on chest radiographs . Various AI products are available commercially, yet there is lack of evidence on how their performance compared with each other and with radiologists. We evaluated five AI software products for screening and triaging TB using a large dataset that had not been used to train any commercial AI products. 
Individuals (>=15 years old) presenting to three TB screening centers in Dhaka, Bangladesh, were recruited consecutively. All CXR were read independently by a group of three Bangladeshi registered radiologists and five commercial AI products : CAD4TB ( v7 ), InferReadDR (v2), Lunit INSIGHT CXR (v4.9.0), JF CXR-1 (v2) , and qXR (v3) . All five AI products significantly outperformed the Bangladeshi radiologists . The areas under the receiver operating characteristic curve are qXR: 90.81\\% (95\\% CI: 90.33-91.29\\%), CAD4TB: 90.34\\% (95\\% CI: 89.81-90.87), Lunit INSIGHT CXR: 88.61\\% (95\\% CI: 88.03\\%-89.20\\%), InferReadDR: 84.90\\% (95\\% CI: 84.27-85.54\\%) and JF CXR-1: 84.89\\% (95\\% CI: 84.26-85.53\\%). Only qXR met the TPP with 74.3\\% specificity at 90\\% sensitivity. Five AI algorithms can reduce the number of Xpert tests required by 50\\%, while maintaining a sensitivity above 90\\%. All AI algorithms performed worse among the older age and people with prior TB history . AI products can be highly accurate and useful screening and triage tools for TB detection in high burden regions and outperform human readers .", "edit_actions": [{"type": "R", "before": "Powered by artificial", "after": "Artificial", "start_char_pos": 0, "end_char_pos": 21}, {"type": "R", "before": ", particularly deep neural networks, computer aided detection (CAD) tools", "after": "products", "start_char_pos": 40, "end_char_pos": 113}, {"type": "R", "before": "TB-related", "after": "tuberculosis (TB)-related", "start_char_pos": 142, "end_char_pos": 152}, {"type": "R", "before": ", thereby screening large numbers of people and reducing the pressure on healthcare professionals. Addressing the lack of studies comparing the performance of different products, we", "after": ". Various AI products are available commercially, yet there is lack of evidence on how their performance compared with each other and with radiologists. 
We", "start_char_pos": 188, "end_char_pos": 369}, {"type": "R", "before": "platforms specific to TB", "after": "products for screening and triaging TB using a large dataset that had not been used to train any commercial AI products. Individuals (>=15 years old) presenting to three TB screening centers in Dhaka, Bangladesh, were recruited consecutively. All CXR were read independently by a group of three Bangladeshi registered radiologists and five commercial AI products", "start_char_pos": 397, "end_char_pos": 421}, {"type": "R", "before": "v6", "after": "v7", "start_char_pos": 433, "end_char_pos": 435}, {"type": "R", "before": "for Chest Radiography", "after": "CXR", "start_char_pos": 471, "end_char_pos": 492}, {"type": "R", "before": "by", "after": ",", "start_char_pos": 517, "end_char_pos": 519}, {"type": "R", "before": "by on an unseen dataset of chest X-rays collected in three TB screening center in Dhaka, Bangladesh. The 23,566 individuals included in the study all received a CXR read by a group of three Bangladeshi board-certified radiologists. A sample of CXRs were re-read by US board-certified radiologists. 
Xpert was used as the reference standard.", "after": ".", "start_char_pos": 533, "end_char_pos": 872}, {"type": "R", "before": "platforms", "after": "products", "start_char_pos": 885, "end_char_pos": 894}, {"type": "R", "before": "human readers", "after": "Bangladeshi radiologists", "start_char_pos": 926, "end_char_pos": 939}, {"type": "R", "before": "curves", "after": "curve", "start_char_pos": 996, "end_char_pos": 1002}, {"type": "R", "before": "0.91", "after": "90.81\\%", "start_char_pos": 1012, "end_char_pos": 1016}, {"type": "R", "before": "0.90-0.91), Lunit INSIGHT CXR: 0.89", "after": "90.33-91.29\\%), CAD4TB: 90.34\\%", "start_char_pos": 1027, "end_char_pos": 1062}, {"type": "R", "before": "0.88-0.89), InferReadDR: 0.85", "after": "89.81-90.87), Lunit INSIGHT CXR: 88.61\\%", "start_char_pos": 1073, "end_char_pos": 1102}, {"type": "R", "before": "0.84-0.86), JF CXR-1: 0.85", "after": "88.03\\%-89.20\\%), InferReadDR: 84.90\\%", "start_char_pos": 1113, "end_char_pos": 1139}, {"type": "R", "before": "0.84-0.85) , CAD4TB: 0.82", "after": "84.27-85.54\\%) and JF CXR-1: 84.89\\%", "start_char_pos": 1150, "end_char_pos": 1175}, {"type": "R", "before": "0.81-0.83). We also proposed a new analytical framework that evaluates a screening and triage test and informs threshold selection through tradeoff between cost efficiency and ability to triage. Further, we assessed the performance of the five AI algorithms across the subgroups of age , use cases, and", "after": "84.26-85.53\\%). Only qXR met the TPP with 74.3\\% specificity at 90\\% sensitivity. Five AI algorithms can reduce the number of Xpert tests required by 50\\%, while maintaining a sensitivity above 90\\%. All AI algorithms performed worse among the older age and people with", "start_char_pos": 1186, "end_char_pos": 1488}, {"type": "R", "before": ", and found that the threshold scores performed differently across different subgroups. 
The positive results of our evaluation indicate that these", "after": ".", "start_char_pos": 1506, "end_char_pos": 1652}, {"type": "A", "before": null, "after": "highly accurate and", "start_char_pos": 1672, "end_char_pos": 1672}, {"type": "R", "before": "active case finding in high TB-burden regions", "after": "TB detection in high burden regions and outperform human readers", "start_char_pos": 1711, "end_char_pos": 1756}], "sents_char_pos": [0, 286, 633, 764, 830, 872, 941, 1197, 1380, 1593]} {"doc_id": "2006.06987", "revision_depth": "2", "before_revision": "Quantitative mechanistic models based on reaction networks with stochastic chemical kinetics can help elucidate fundamental biological process where random fluctuations are relevant , such as in single cells. The dynamics of such models is described by the master equation , which provides the time course evolution of the probability distribution across the discrete state space consisting of vectors of population levels of the interacting biochemical species. Since solving the master equation exactly is very difficult in general due to the combinatorial explosion of the state space size , several analytical approximations have been proposed. The deterministic rate equation (DRE) offers a macroscopic view of the system by means of a system of differential equations that estimate the average populations for each species, but it may be inaccurate in the case of nonlinear interactions such as in mass-action kinetics . Here we propose finite state expansion (FSE), an analytical method that mediates between the microscopic and the macroscopic interpretations of a chemical reaction network by coupling the master equation dynamics of a chosen subset of the discrete state space with the population dynamics of the DRE. This is done via an algorithmic translation of a chemical reaction network into a target expanded one where each discrete state is represented as a further distinct chemical species. 
The translation produces a network with stochastically equivalent dynamics, but the DRE of the expanded network can be interpreted as a correction to the original ones. Through a publicly available software implementation of FSE , we demonstrate its effectiveness in models from systems biology which challenge state-of-the-art techniques due to the presence of intrinsic noise, multi-scale population dynamics , and multi-stability.", "after_revision": "Stochastic reaction networks are a fundamental model to describe interactions between species where random fluctuations are relevant . The master equation provides the evolution of the probability distribution across the discrete state space consisting of vectors of population counts for each species. However, since its exact solution is often elusive , several analytical approximations have been proposed. The deterministic rate equation (DRE) gives a macroscopic approximation as a compact system of differential equations that estimate the average populations for each species, but it may be inaccurate in the case of nonlinear interaction dynamics . Here we propose finite state expansion (FSE), an analytical method mediating between the microscopic and the macroscopic interpretations of a stochastic reaction network by coupling the master equation dynamics of a chosen subset of the discrete state space with the mean population dynamics of the DRE. An algorithm translates a network into an expanded one where each discrete state is represented as a further distinct species. This translation exactly preserves the stochastic dynamics, but the DRE of the expanded network can be interpreted as a correction to the original one. 
The effectiveness of FSE is demonstrated in models that challenge state-of-the-art techniques due to intrinsic noise, multi-scale populations , and multi-stability.", "edit_actions": [{"type": "R", "before": "Quantitative mechanistic models based on reaction networks with stochastic chemical kinetics can help elucidate fundamental biological process", "after": "Stochastic reaction networks are a fundamental model to describe interactions between species", "start_char_pos": 0, "end_char_pos": 142}, {"type": "R", "before": ", such as in single cells. The dynamics of such models is described by the master equation , which provides the time course", "after": ". The master equation provides the", "start_char_pos": 182, "end_char_pos": 305}, {"type": "R", "before": "levels of the interacting biochemical species. Since solving the master equation exactly is very difficult in general due to the combinatorial explosion of the state space size", "after": "counts for each species. However, since its exact solution is often elusive", "start_char_pos": 416, "end_char_pos": 592}, {"type": "R", "before": "offers a macroscopic view of the system by means of a", "after": "gives a macroscopic approximation as a compact", "start_char_pos": 687, "end_char_pos": 740}, {"type": "R", "before": "interactions such as in mass-action kinetics", "after": "interaction dynamics", "start_char_pos": 880, "end_char_pos": 924}, {"type": "R", "before": "that mediates", "after": "mediating", "start_char_pos": 994, "end_char_pos": 1007}, {"type": "R", "before": "chemical", "after": "stochastic", "start_char_pos": 1073, "end_char_pos": 1081}, {"type": "A", "before": null, "after": "mean", "start_char_pos": 1196, "end_char_pos": 1196}, {"type": "R", "before": "This is done via an algorithmic translation of a chemical reaction network into a target", "after": "An algorithm translates a network into an", "start_char_pos": 1229, "end_char_pos": 1317}, {"type": "R", "before": "chemical species. 
The translation produces a network with stochastically equivalent", "after": "species. This translation exactly preserves the stochastic", "start_char_pos": 1394, "end_char_pos": 1477}, {"type": "R", "before": "ones. Through a publicly available software implementation of FSE , we demonstrate its effectiveness in models from systems biology which", "after": "one. The effectiveness of FSE is demonstrated in models that", "start_char_pos": 1575, "end_char_pos": 1712}, {"type": "D", "before": "the presence of", "after": null, "start_char_pos": 1758, "end_char_pos": 1773}, {"type": "R", "before": "population dynamics", "after": "populations", "start_char_pos": 1803, "end_char_pos": 1822}], "sents_char_pos": [0, 208, 462, 648, 926, 1228, 1411, 1580]} {"doc_id": "2006.07209", "revision_depth": "1", "before_revision": " This study attempts to assess the effectiveness of nonpharmaceutical interventions towards SARS-CoV-2 infections in Germany. Using dates of infection estimated from official case data, exponential growth models for infections and reproduction numbers were estimated and investigated with respect to change points . Clear evidence is found of a decline of infections at the beginning of March, which ] ] ] ] ] can be attributed to relatively small interventions and voluntary behavioral changes. The effects of later interventions remain unclear . Liberalizations of measures did not induce a re-increase of infections. These results contradict previous studies on the German case. The study also exhibits three methodological challenges with respect to assessing of interventions : a) the estimation of true infection dates , b) the usage of several indicators and c) the influence of test volume . 
In conclusion, the effectiveness of most German interventions remains questionable .", "after_revision": "Aims: Nonpharmaceutical interventions against the spread of SARS-CoV-2 in Germany included the cancellation of mass events (from March 8), closures of schools and child day care facilities (from March 16) as well as a \"lockdown\" (from March 23). This study attempts to assess the effectiveness of these interventions in terms of revealing their impact on infections over time. Methods: Dates of infections were estimated from official German case data by incorporating the incubation period and an empirical reporting delay. Exponential growth models for infections and reproduction numbers were estimated and investigated with respect to change points in the time series. Results: A significant decline of daily and cumulative infections as well as reproduction numbers is found at March 8 (CI 7, 9]), March 10 (CI 9, 11] and March 3 (CI 2, 4]), respectively. Further declines and stabilizations are found in the end of March. There is also a change point in new infections at April 19 (CI 18, 20]), but daily infections still show a negative growth. From March 19 (CI 18, 20]), the reproduction numbers fluctuate on a level below one. Conclusions: The decline of infections in early March 2020 can be attributed to relatively small interventions and voluntary behavioural changes. Additional effects of later interventions cannot be detected clearly . Liberalizations of measures did not induce a re-increase of infections. Thus, the effectiveness of most German interventions remains questionable. 
Moreover, assessing of interventions is impeded by the estimation of true infection dates and the influence of test volume .", "edit_actions": [{"type": "A", "before": null, "after": "Aims: Nonpharmaceutical interventions against the spread of SARS-CoV-2 in Germany included the cancellation of mass events (from March 8), closures of schools and child day care facilities (from March 16) as well as a \"lockdown\" (from March 23).", "start_char_pos": 0, "end_char_pos": 0}, {"type": "R", "before": "nonpharmaceutical interventions towards SARS-CoV-2 infections in Germany. Using dates of infection", "after": "these interventions in terms of revealing their impact on infections over time. Methods: Dates of infections were", "start_char_pos": 52, "end_char_pos": 150}, {"type": "R", "before": "case data, exponential", "after": "German case data by incorporating the incubation period and an empirical reporting delay. Exponential", "start_char_pos": 175, "end_char_pos": 197}, {"type": "R", "before": ". Clear evidence is found of a decline of infections at the beginning of March, which", "after": "in the time series. Results: A significant decline of daily and cumulative infections as well as reproduction numbers is found at March 8 (CI", "start_char_pos": 314, "end_char_pos": 399}, {"type": "A", "before": null, "after": "7, 9", "start_char_pos": 400, "end_char_pos": 400}, {"type": "A", "before": null, "after": "), March 10 (CI", "start_char_pos": 401, "end_char_pos": 401}, {"type": "A", "before": null, "after": "9, 11", "start_char_pos": 402, "end_char_pos": 402}, {"type": "A", "before": null, "after": "and March 3 (CI", "start_char_pos": 404, "end_char_pos": 404}, {"type": "A", "before": null, "after": "2, 4", "start_char_pos": 405, "end_char_pos": 405}, {"type": "A", "before": null, "after": "), respectively. Further declines and stabilizations are found in the end of March. 
There is also a change point in new infections at April 19 (CI", "start_char_pos": 406, "end_char_pos": 406}, {"type": "A", "before": null, "after": "18, 20", "start_char_pos": 407, "end_char_pos": 407}, {"type": "A", "before": null, "after": "), but daily infections still show a negative growth. From March 19 (CI", "start_char_pos": 408, "end_char_pos": 408}, {"type": "A", "before": null, "after": "18, 20", "start_char_pos": 409, "end_char_pos": 409}, {"type": "A", "before": null, "after": "), the reproduction numbers fluctuate on a level below one. Conclusions: The decline of infections in early March 2020", "start_char_pos": 410, "end_char_pos": 410}, {"type": "R", "before": "behavioral changes. The", "after": "behavioural changes. Additional", "start_char_pos": 477, "end_char_pos": 500}, {"type": "R", "before": "remain unclear", "after": "cannot be detected clearly", "start_char_pos": 532, "end_char_pos": 546}, {"type": "R", "before": "These results contradict previous studies on the German case. The study also exhibits three methodological challenges with respect to", "after": "Thus, the effectiveness of most German interventions remains questionable. Moreover,", "start_char_pos": 621, "end_char_pos": 754}, {"type": "R", "before": ": a)", "after": "is impeded by", "start_char_pos": 782, "end_char_pos": 786}, {"type": "R", "before": ", b) the usage of several indicators and c)", "after": "and", "start_char_pos": 826, "end_char_pos": 869}, {"type": "D", "before": ". In conclusion, the effectiveness of most German interventions remains questionable", "after": null, "start_char_pos": 899, "end_char_pos": 983}], "sents_char_pos": [0, 125, 496, 620, 682, 900]} {"doc_id": "2006.07223", "revision_depth": "1", "before_revision": "This paper studies an infinite horizon optimal consumption problem under exponential utility, together with non-negativity constraint on consumption rate and a reference point to the past consumption peak . 
The performance is measured by the distance between the consumption rate and a fraction 0\\leq\\lambda\\leq 1 of the historical consumption maximum. To overcome its path-dependent nature, the consumption running maximum process is chosen as an auxiliary state process that renders the value function two dimensional depending on the wealth variable x and the reference variable h. The associated Hamilton-Jacobi-Bellman (HJB) equation is expressed in the piecewise manner across different regions to take into account constraints. By employing the dual transform and smooth-fit principle, the classical solution of the HJB equation is obtained in an analytical form, which in turn provides the feedback optimal investment and consumption . For 0<\\lambda<1, we are able to find four boundary curves x_1(h), \\breve{x}(h), x_2(h) and x_3(h) for the wealth level x that are nonlinear functions of h such that the feedback optimal consumption satisfies: (i) c^*(x,h)=0 when x\\leq x_1(h); (ii) 0