Cheridito et al. (2006) considered dynamic coherent and convex monetary risk measures for discrete-time processes modelling the evolution of financial values. Acciaio et al. (2012) extended the dynamic convex risk measures in Cheridito et al. to take the timing of cash flows into consideration. Sun and Hu (2018) introduced a new class of set-valued risk measures which satisfies cash sub-additivity and investigated dynamic set-valued cash sub-additive risk measures. Wang et al. (2021) constructed some general continuous-time equilibrium dynamic risk measures by using an adapted solution to a backward stochastic Volterra integral equation. Chen et al. (2018) and Sun et al. (2018) extended convex risk measures to loss-based cases. More recent research on dynamic risk measures can be found in Chen et al. (2021), Chen and Feinstein (2022), Mastrogiacomo and Rosazza (2022), and Yoshioka and Yoshioka (2024).

Nowadays, as the digital economy and cryptocurrencies develop rapidly, they have a great impact on the financial market. The volatility of cryptocurrencies is a distinctive characteristic defined by rapid and substantial price fluctuations within relatively short periods. Compared to traditional financial assets, cryptocurrencies such as Bitcoin and Ethereum are widely held for speculative purposes, which can lead to extreme volatility and bubbles (see Fry and Cheah, 2016). Factors contributing to this volatility include market sentiment, regulatory developments, technological advancements, and macroeconomic conditions. Beyond this extreme volatility, a mixing of different orders of risk data within a short period can occur when multiple levels or types of risk factors simultaneously influence the financial market. For example, major economic events, such as financial crises, geopolitical tensions, or central bank policy announcements, can trigger rapid and varied responses across different asset classes, resulting in mixed risk signals. In addition, sudden and unexpected shocks to the market, whether related to economic indicators, corporate news, or global events, can lead to a convergence of various risk factors that creates a mixed picture of risk data; Yang et al. (2023) found that an increase in economic policy uncertainty in China and the US exacerbates fluctuations in the global oil price, particularly during times of crisis. Besides, high correlations between different asset classes or markets can lead to a synchronization of risk data: during periods of heightened risk aversion, equities, currencies, and commodities may all exhibit increased volatility simultaneously. Therefore, the need for comprehensive risk measures that can capture the complexity and increasing fluctuation of market volatility is significant, not only for new financial assets but also for traditional financial markets in a rapidly changing financial environment and global landscape.

Our study is of practical interest because existing numerical methods have proven to be less efficient in the case of short maturity or low volatility. Numerical analysis of the Asian option was conducted in Geman and Yor (1993); Linetsky (2004); Broadie et al. (1999); Boyle and Potapchik (2008). However, as pointed out in Fu et al. (1999); Vecer (2002), such methods are either problematic in the short-maturity regime or computationally expensive. We expect our analysis to help overcome the numerical inefficiency in the short-maturity regime.

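To make the inefficiency concrete, here is a minimal sketch (not any of the cited methods) of a plain Monte Carlo pricer for an arithmetic-average Asian call under geometric Brownian motion; all parameter values are illustrative. For an out-of-the-money strike, ever fewer paths produce a positive payoff as the maturity shrinks, so the relative standard error of the estimate deteriorates exactly in the short-maturity regime discussed above.

```python
import numpy as np

def asian_call_mc(s0, strike, r, sigma, maturity, n_steps=50, n_paths=100_000, seed=0):
    """Plain Monte Carlo price of an arithmetic-average Asian call under GBM."""
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Simulate log-price paths in one vectorized pass.
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    avg_price = (s0 * np.exp(log_paths)).mean(axis=1)
    payoff = np.exp(-r * maturity) * np.maximum(avg_price - strike, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

# Out-of-the-money strike: the relative error of the estimator degrades as T shrinks.
for T in (1.0, 0.25, 0.05):
    price, stderr = asian_call_mc(s0=100.0, strike=105.0, r=0.02, sigma=0.2, maturity=T)
    print(f"T={T:5.2f}  price={price:.5f}  relative std-err={stderr / max(price, 1e-12):.2%}")
```
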
A very natural framework to tackle this specific issue is Functional Data Analysis (FDA) [29], the branch of statistics that deals with studying data points that come in the shape of continuous functions over some kind of domain. FDA is a niche yet established area in the statistical literature, with many applied and methodological publications in all domains of knowledge, including spatial and space-time FDA [7, 16, 13, 19, 12], coastal engineering [21], environmental studies [3, 18], transportation science [27] and epidemiology [32].

Methodologies for GSA that are able to deal with functional outputs are present in the literature: [14] propose non-time-varying sensitivity indices for models with functional outputs, based on a PCA expansion of the data. This approach is thus not capable of detecting the presence of time variations in impacts, nor does it address the issue of statistical significance of impacts. [11] proposes a similar approach, without specifying a fixed functional basis, and proposing an innovative functional pick-and-freeze method for estimation. [9] instead use a Bayesian framework, based on adaptive splines, to extract also in this case non-time-varying indices. In all the cited works on GSA techniques for functional outputs, uncertainty is not explicitly explored. A very sound framework for the GSA of stochastic models with scalar outputs is provided in [2].

The pathwise approach, pioneered by [36], makes no assumptions on the dynamics of the underlying assets. Instead, the set of all models which are consistent with the prices of observed vanilla options was investigated and bounds on the prices of exotic derivatives were derived. The approach was applied to barrier options in [13], to forward start options in [38], to variance options in [17], and to weighted variance swaps in [24], among others. [23] introduced the concept of model-independent arbitrage and characterized three different situations that a set of option prices would fall into: absence of arbitrage, model-independent arbitrage, or weak forms of model-dependent arbitrage. A notion of weak arbitrage was discussed in [21] to deal with the case of infinitely many given options. In discrete time, [28] proved a duality result for a class of continuous payoffs in a specific topological setup. Using the theory of Monge–Kantorovich mass transport, [9] established superhedging dualities for exotic options. Pathwise versions of FTAP were given in [60] for a one-period market model and in [1] for a continuous time model where a superlinearly growing option is traded. In discrete time markets, [15], [14] proved versions of FTAP by investigating different notions of arbitrage and using different sets of admissible scenarios. [16] proved a superhedging duality theorem, characterized […]

Our leading application of excludability is to preferences with single-crossing differences (SCD). Here we show that learning obtains when the information structure satisfies directionally unbounded beliefs (DUB). SCD is a familiar property (Milgrom and Shannon, 1994) that is widely assumed in economics: it captures settings in which there are no preference reversals as the state increases. By contrast, DUB appears to be a new condition on information structures, although Milgrom (1979) utilizes a related property in the context of auction theory. Like SCD, DUB is formulated for a (totally) ordered state space. It requires that for any state ω and any prior that puts positive probability on ω, there exist both: (i) signals that make one arbitrarily certain that the state is at least ω; and (ii) signals that make one arbitrarily certain that the state is at most ω.

To illustrate the quantitative implications of the model we apply it to our running example, regulatory approval by the FDA. Applying the formulae implied by the model to published data on the cost structure of clinical trials, we calculate adjusted critical values that are neither as liberal as unadjusted testing, nor as conservative as those implied by some of the procedures in current use. We also explore potential applicability to research in economics, where the use of MHT adjustment is on the rise (see Figure 3), using a unique dataset on the costs of projects submitted to the Abdul Latif Jameel Poverty Action Lab (J-PAL) which we assembled for this purpose. Overall the main message is that the specific procedures that are optimal will vary depending on the details of the scientific communication process, something that is clear even within the range of possibilities we consider here. But we expect two principles to be robust more broadly. First, costs must matter in any model that justifies MHT as a way of “getting researcher incentives right.” If incentives matter, then it must be the net incentives, i.e. rewards net of costs, that matter. And second, different kinds of multiplicity may call for different solutions depending on how they map to decision-making.

…[R]aising enough sheep to replace the yarn made with Britain’s New World cotton imports would have required staggering quantities of land: almost 9,000,000 acres in 1815, using ratios from model farms, and over 23,000,000 acres in 1830. This final figure surpasses Britain’s total crop and pasture land combined. It also surpasses Anthony Wrigley’s estimate that matching the annual energy output of Britain’s coal industry circa 1815 would have required that the country magically receive 15,000,000 additional acres of forest. If we add cotton, sugar, and timber circa 1830, we have somewhere between 25,000,000 and 30,000,000 ghost acres, exceeding even the contribution of coal by a healthy margin. (p. 276)

Based on this calculation, I set the land supply Z after the relief of land constraints to […].

A natural question is how contributions and average degree (number of outgoing links) are impacted by the information treatment. The dynamics of these variables can be found in Figures 1(a) and 1(b), along with corresponding estimation results in columns (1) and (2) in Table 1. We find that the treatment substantially increases both contributions and linking. Contributions still show a tendency to decrease over time, and the rate of this decay is not significantly impacted by the treatment. Links, on the other hand, exhibit an additional differential dynamic. Although there is a strong initial boost from the treatment intervention, decay in the number of links actually appears to accelerate mildly relative to groups in the baseline sessions.

Result 1.

One popular criterion is based on convex risk measures [4, 27, 37]. A naïve combination of convex risk measures and discounted total costs, however, lacks time consistency, hindering the derivation of a corresponding DPP. Roughly speaking, time consistency refers to the property that smaller scores in future epochs guarantee a smaller score in the current epoch. We refer to [11] for a survey on various definitions of time consistency. There is a stream of literature (see, e.g., [29, 54, 52, 55, 33, 50, 22, 5]) that studies time consistency from multiple angles and/or attempts to integrate convex risk measures and their variations into MDPs. While here we are not concerned with model uncertainty, we would like to point out [10] and the references therein for a framework that handles model uncertainty in MDPs.

6 Conclusion

In this paper, we propose a stochastic lookahead policy embedded in a data-driven sequential decision process for determining replenishment order quantities in e-grocery retailing. We aim at investigating to what extent this approach allows a retailer to improve the inventory management process when faced with multiple sources of non-stationary uncertainty, namely stochastic customer demand, shelf lives, and supply shortages, a lead time of multiple days, and demand that is lost if not served. To this purpose, we represent the determination of replenishment order quantities as solutions of a dynamic stochastic periodic-review inventory model with lost sales and an expected-cost objective function. In real-world applications, the probability distributions of the inventory level at the beginning of a period and its marginals, such as distributions of demand, spoilage, and supply shortage, are typically unknown and hence need to be estimated. The periodically updated estimates for these distributions form the states in the sequential decision process; the inventory model plays the role of a decision model (see Figure 3). The analysis of data provided by the business partner was carried out in previous studies using descriptive and predictive methods (Ulrich et al., 2021, 2022); the findings are applied in the numerical analyses of this paper. The literature stresses the difficulty of finding an optimal replenishment policy for decision models like the one discussed here. We therefore propose a stochastic lookahead policy that allows us to integrate probabilistic forecasts for the underlying probability distributions into the optimisation process in a dynamic multi-period framework. We thereby demonstrate the feasibility of the integration of the different components of the data-driven sequential decision process (analytics and statistics, modelling and optimisation). In addition, the framework enables us to gain insights into the value of probabilistic information in our environment, not least in order to find some guidance for designing an adequate decision model. Finally, we show that such a framework is applicable to a real-world business environment of e-grocery retailing, potentially to the benefit of the retailer.

The results of estimating equation 1 are shown in Table 4. When using the fixed effects (FE) estimator, we find a non-significant effect of external debt on GHG emissions. However, when we take into account the potential endogeneity problem, we find a positive and statistically significant effect of external debt on GHG emissions (column 2). In particular, a rise of 1 pp. in external debt causes, on average, a 0.5% increase in GHG emissions, holding other variables constant. These results suggest the presence of omitted variable bias in the FE estimates, i.e., there is at least one omitted factor that is negatively correlated with GHG emissions but positively with external debt (or vice versa). [Footnote 7: Note that the FE estimates are the same if we include GDP growth and net capital inflows as in the IV estimates. These results are available upon request.] Indeed, the endogeneity issue may be behind previous findings of the non-significant effect of external debt on GHG emissions (Akam et al., 2021).

First stage results are presented in the third column of Table 4. As expected, we find that a negative shock on international liquidity is negatively associated with the ratio of external debt over GDP. In other words, countries borrow less facing an increase in the global cost of liquidity, either because it is more difficult for them to roll over existing debt or because it is more expensive to issue new debt. At the bottom of the table, we report statistical tests to evaluate the relevance of the instruments. In particular, we show the first stage F-statistic, the test for under-identification, and the Anderson-Rubin Wald test for weak identification. We reject the null of under-identification and find that the coefficient associated with the external debt is statistically significant in the presence of weak identification. The F-statistic is well above 10.

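A compact sketch of this estimation strategy using the `linearmodels` package follows; the data frame and all variable names (`log_ghg`, `ext_debt_gdp`, `liquidity_shock`, and the controls) are hypothetical placeholders, and the paper's actual specification of equation 1 may differ.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical country-year panel; column names are placeholders.
df = pd.read_csv("panel.csv")

# 2SLS: instrument the external-debt ratio with a global liquidity shock.
# The bracketed term is endogenous-variable ~ instrument; the first stage
# is estimated internally.
res = IV2SLS.from_formula(
    "log_ghg ~ 1 + gdp_growth + net_capital_inflows + [ext_debt_gdp ~ liquidity_shock]",
    data=df,
).fit(cov_type="robust")

print(res.summary)      # second-stage coefficients
print(res.first_stage)  # first-stage F-statistic and weak-instrument diagnostics
```
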
Again, we complete this preliminary verification with the signature-based validation test. We implement the same steps described in the previous section (here, we obtain m = 72 historical paths), except that we work with the log-signature, which leads to higher statistical power on synthetic data. The obtained p-values (see Table 5(b)) show that the GRM model is rejected at any standard level while the RSAR(1) process is not. This is particularly interesting given that the p-values of the two-sample Kolmogorov-Smirnov test reported in Table 4(b) are very close. Moreover, this result is in line with the empirical observation that the inflation dynamics exhibits roughly two regimes in the inflation data set (see Figure 14): a regime of low inflation (e.g. between 1982 and 2021) and a regime of high inflation (e.g. between 1972 and 1982).

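The signature-based test itself requires computing (log-)signatures of the paths, which is beyond a short snippet, but the two-sample Kolmogorov-Smirnov comparison mentioned above is easy to illustrate; the arrays below are synthetic stand-ins for m = 72 historical statistics and their model-generated counterparts.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
historical = rng.normal(0.0, 1.0, size=72)  # stand-in for historical statistics
synthetic = rng.normal(0.1, 1.0, size=72)   # stand-in for model-generated ones

# Two-sample KS test: a small p-value rejects the hypothesis that both
# samples are drawn from the same distribution.
stat, p_value = ks_2samp(historical, synthetic)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```
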
When using the AI bot GPT-3 (Brown et al. (2020)), I generate a distribution of responses by exploiting the probabilities produced by the model, as explained in Sect. 3.4.1. A single response from GPT-3 aggregates information from multiple sources and produces output words by sampling words according to the softmax probability produced by the Transformer (Vaswani et al. (2017)). This allows me to study the variability in response to the same prompt with a single run of the experiment. After I completed the experiment and described it in an earlier draft, I was made aware of other recent work using an AI bot in lieu of surveys (Argyle et al. (2022); Aher et al. (2022)), although not in the context of anchoring effects. Variability in response to different prompts has allowed scientists to use GPT-3 to evaluate the biases of AI bots (Aher et al. (2022)), including, recently, biases in job descriptions (Borchers et al. (2022)). Others have looked at the effect of gendered speech (Lucy and Bamman (2021)) and more general issues in cognitive psychology (Binz and Schulz (2022)). In this project, I do not engage in analysis specific to GPT-3; however, I conduct pointed experiments to assess the stability of GPT-3’s responses with respect to grammatical, syntactic and semantically meaningful perturbations.

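As a rough illustration of how a single run yields a response distribution, the toy sketch below converts a vector of next-token scores into softmax probabilities and samples from them; the logits and vocabulary are invented, and a real GPT-3 call would instead expose per-token (log-)probabilities through its API.

```python
import numpy as np

def sample_responses(token_logits, vocab, n_samples=1000, temperature=1.0, seed=0):
    """Turn one vector of next-token logits into an empirical answer distribution.

    `token_logits` and `vocab` are toy stand-ins for the per-token scores a
    model like GPT-3 produces for candidate completions.
    """
    rng = np.random.default_rng(seed)
    logits = np.asarray(token_logits, dtype=float) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax
    draws = rng.choice(len(vocab), size=n_samples, p=probs)
    values, counts = np.unique(draws, return_counts=True)
    return {vocab[v]: c / n_samples for v, c in zip(values, counts)}

# Toy example: candidate numeric answers to an anchoring prompt.
print(sample_responses([2.0, 1.2, 0.3], vocab=["40", "50", "60"]))
```
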
Under the assumption of homogeneous probabilities of success, we estimate the average cost of developing a drug to be $58.51 million. This high cost suggests that drug development is risky compared to the expected value of $63.37 million at the discovery stage. Furthermore, by leveraging discontinuation announcements at various stages, something we did not need to use until now, we estimate the expected cost of clinical trials, where the expectation is taken at the discovery stage, to be approximately $12.43 million. Breaking down the clinical trial costs by phase, we find that the expected cost of Phase I clinical trials is $0.62 million, which increases significantly to $30.48 million and $41.09 million for Phases II and III, respectively. These estimates suggest escalating costs with each successive phase.

Our cost estimates allow us to assess the average cost of bringing new drugs to the market and inform regulations such as price interventions (U.S. House of Representatives, 2021). [Footnote 2: For more on the topic from the Congressional Budget Office, see here and here.] Also, see Dubois and Kyle (2016) for estimates of the costs per statistical life saved from cancer drugs. The best-known cost estimates either rely on confidential surveys and a sample of a few firms (DiMasi et al., 2016) or estimate the accounting cost of a single trial (Sertkaya et al., 2016). Our work complements this literature by providing economic, risk-adjusted cost estimates based on a representative sample of drugs, leveraging the valuation approach we developed.

For general probability distributions, loading a discretized probability density function (PDF) remains one of the main problems in quantum computing. In the quantum computing literature, this step is also referred to as quantum state preparation, and it is an important initialization step for many quantum algorithms for pricing options. Recently, there have been new approaches to the quantum state preparation problem in the literature that are not related to the qGAN or VQE methods reviewed above. In [iaconis2023quantum], the authors considered the quantum state preparation problem for probability distributions with smooth differentiable density functions, such as the normal distribution, where they proposed an algorithm based on the matrix product state (MPS) approximation method, and provide an error analysis and numerical convergence for the single-variate normal distribution. In [pracht2023pricing], the author proposed a quantum binomial tree algorithm to approximate the option prices in a discrete time setting. We refer the reader to [chang2023novel] for a similar random walk based algorithm, and to [de2023quantum] for a hybrid classical-quantum approach based on deconvolution methods for the quantum state preparation problem. However, to the best of our knowledge, there seems not to be any result in the literature that provides rigorous upper bounds on the quantum circuit complexities as well as convergence for general multi-variate distributions.

The rest of the paper is organized as follows. In Section 2, we introduce the auxiliary state processes with reflections and derive the associated HJB equation with two Neumann boundary conditions for the auxiliary stochastic control problem. In Section 3, we address the solvability of the dual PDE problem by verifying a separation form of the solution and the probabilistic representations, the homogenization of Neumann boundary conditions and the stochastic flow analysis. The verification theorem on the optimal feedback control is presented in Section 4, together with the technical proofs on the strength of stochastic flow analysis and estimations of the optimal control. It is also verified therein that the expected total capital injection is bounded. Finally, the proof of an auxiliary lemma is reported in Appendix A.

The work faces several limitations. Conceptually, the framework does not replace other decarbonization modeling frameworks, like IAMs, and integrating results obtained from this study into full-scale climate-economic models remains future work. The empirical part relies on a unique dataset of an entire economy’s firm-level production network, but the dataset is incomplete with respect to imports, exports, and prices. Simplifying assumptions are necessary to assign production functions and estimate replaceability, detailed in [28].

Prices are not considered, as we adapt a network measure for short-term shock propagation in supply chains. Neglecting prices can be somewhat justified for the analysis of short-term economic dynamics, as several studies on shock propagation in supply chains demonstrated [19, 20]. However, price dynamics becomes significant over longer time horizons, impacting shock propagation and network reconfiguration. Further research is needed to understand the role of price dynamics in shock propagation in firm-level production networks, extending simulation horizons.

Lastly, we present a sensitivity analysis for the carbon tax. Similarly to the case of the other input parameters, we fix the values of incentives and TEB budget to 0% and €10M, respectively, while assuming the GEB to take values from Table 2. Here, we have also omitted the cases in which GenCos have a €1M capacity expansion budget due to the similarities in optimal decision values with the €10M case. We consider the values for the carbon tax to lie within the interval given in Table 2.

Figures 9 and 10 present the output factors considering centralised and perfectly competitive market structures, respectively. As one can see, the results present a threshold value for the tax, after which it becomes effective and influences the increase of the VRE share in the total mix. This value lies between 75 €/MWh and 100 €/MWh for the GenCos with €10M GEB, and between 0 €/MWh and 25 €/MWh when one or both GenCos possess €1B GEB. Nevertheless, while serving its purpose regardless of the GEB value, the carbon tax causes a decrease in the total generation and, consequently, in the total welfare. Both output factors only remain stable once GenCos cease nearly all conventional generation (i.e., the VRE share in the total generation mix is close to 100%).

The majority of the literature considers the case of a single large trader. [43], however, consider a continuous time financial market where the price impact - both temporary and permanent - results from the investment of n+1 ’strategic players’. Additionally, so-called market impact games, in which a finite number of large traders aims to minimize their liquidation/execution cost, have for example been considered by [40, 42, 37, 23]. Moreover, [14] considers two agents who interact strategically through their linear impact on the return of the risk-free asset. Maximizing their terminal wealth under CRRA utility, he derives the unique constant pure-strategy Nash equilibrium. Risk-averse investors competing to maximize expected utility of terminal wealth have also been considered by [41].

In the following, we solve an n-agent portfolio problem with relative performance concerns where we allow the agents to jointly influence the asset dynamics, which is reasonable if n is large, and which has not been done before.

4 Conclusions

We show that trade data can be used to approximate the flow of mineral resources in a meaningful way when combined with other data sources. Our flow analysis provides a useful foundation for the analysis of global P flows in terms of phosphate rock, fertilizers and related goods before biomass production. As such, it allows us to derive valuable information for the analysis of vulnerabilities in countries’ supply relationships, including food security. For this, the translation of nominal bilateral trade flows into material flows of P is an important step in terms of accuracy. We provide the information on (a) the origin of P flows, (b) their destinations and approximate material composition and (c) the resulting complex system of dependencies in supply. This work also provides the means to analyze possible inconsistencies in different data sets by comparing model-based estimates to different official data sources. Another important aspect of this study is the general applicability of the approach to other raw material flows, such as sulfur, nitrogen or potassium.

As a supplement to compulsory annuities, a new type of insurance product called Registered Index-Linked Annuity (RILA) has been introduced in the U.S., also known as ‘index-linked annuities’ or ‘structured variable annuities.’ Moenig [18] provided a timely first academic study on RILAs. A RILA is a complex annuity with insurance properties that offers policyholders the flexibility to prioritise their growth opportunities while limiting potential losses. In a generic RILA contract, the policyholder can choose from several index options, protection levels, and other available options to achieve their preferred profile of potential growth. Typically, holders of a RILA can choose to tie their wealth account to the performance of a particular market index, such as the S&P 500 for U.S. large-cap stocks, the NASDAQ Composite for all American and foreign common stocks, and the MSCI EAFE for stocks from developed economies outside of North America.

In essence, if the reference index performs poorly, the credited loss is lessened by either a floor (i.e., a maximum loss percentage), a buffer (i.e., only index losses over a specific threshold are credited), or a downside participation rate (e.g., the loss credited to the account corresponds to only 50% of index losses). As pointed out in [18], the insurer can benefit from the imbalance between the guarantee’s downside protection and its upside cap, the ability to invest RILA funds into other products (e.g., corporate bonds) and hence earn a credit risk premium and, in some cases, from using a price-based index not accounting for dividend payments.

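The crediting rules described above are straightforward to express in code. Below is a minimal sketch with illustrative parameter values (a 15% cap, a 10% floor or buffer, 50% downside participation); actual contracts fix these terms at issue and may combine them differently. For an index return of -30%, the floor credits -10%, the buffer -20%, and the 50% participation rate -15%.

```python
def credited_return(index_return, cap=0.15, floor=None, buffer=None, down_participation=None):
    """Credited RILA return for one term under a cap plus one protection feature.

    All parameter values are illustrative, not taken from any specific contract.
    """
    if index_return >= 0:
        return min(index_return, cap)      # upside is capped
    if floor is not None:                  # losses beyond the floor are absorbed
        return max(index_return, -floor)
    if buffer is not None:                 # insurer absorbs the first `buffer` of losses
        return min(index_return + buffer, 0.0)
    if down_participation is not None:     # only a fraction of the loss is credited
        return down_participation * index_return
    return index_return

for r in (0.25, -0.05, -0.30):
    print(r,
          credited_return(r, floor=0.10),
          credited_return(r, buffer=0.10),
          credited_return(r, down_participation=0.5))
```
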
The link between default and income is potentially complex: on the one hand, income shocks can affect default; this, however, is not our object of interest. On the other hand, default episodes, through their effects on credit availability, can affect income generating opportunities in several respects. For example, it is known that non-credit actors such as potential employers, landlords, insurance companies, and mobile phone providers also make ample use of such information. Survey data show that, in the US, almost 50% of firms check the credit information of prospective employees (Bos et al. [2018]). As a consequence, a soft default may reduce employment opportunities, condition mobility, and the availability of essential services for the individual, and hence negatively influence her income.

Another possibility is that a soft default triggers a relocation to a zip code that is smaller or otherwise worse in terms of job opportunities. Our results in Appendix J seem to go in this direction, i.e. after a soft default individuals on average end up in a zip code with lower local income, fewer employees, fewer firms, and lower average wage.

Table 5. Posterior properties of the model parameters in the 2021-23 time period. We present the mean and the standard deviation (SD) for all model parameters.

3- and 4-factor model: In the crisis period 2021-23, the Gaussian component of the 3-factor model is only responsible for the short term fluctuations (cf. Figure 12(A)), while the large long term deviations are modeled as a superposition of positive and negative jumps (cf. Figure 12(B)). These observations for the sample paths of the underlying processes are in line with the estimated model parameters we obtain in Table 5, where we have high values for both positive and negative jump sizes.

Some works (Finn et al., 2019; Nagabandi et al., 2019; He et al., 2020) extend MAML to online settings on the assumption that the support set and the corresponding query set come from the same context, i.e., following the same distribution. As such, the meta-learner will quickly remember task-specific information and perform well on a similar query set. However, this assumption cannot hold when discrepancies between the two sets are non-negligible. LLF (You et al., 2021) studies MAML in an offline setting, proving that a predictor optimized by MAML can generalize well against concept drifts. However, the query sets are unlabeled in online settings, and one can only retrain the meta-learner after detecting a shift (Caccia et al., 2020). Consequently, the predictions are still susceptible to distribution shifts. Some methods (Zhang et al., 2020; You et al., 2022) combine incremental learning and meta-learning for recommender systems and dynamic graphs but ignore distribution shifts. SML (Zhang et al., 2020) focuses on model adaptation and proposes a transfer network to convert model parameters on incremental data, which is orthogonal to our work.

• Notes: This table presents the correlation between investor emotions, information content and two stylized facts regarding initial public offering (IPO) returns. The first dependent variable is the first-day return, which is computed as the difference between the closing and the IPO price, divided by the IPO price. This is shown in Columns 1-4. The second dependent variable is the 12-month industry-adjusted return, which is computed from three months after the IPO (Columns 5-8). The industry classification used is the Fama-French 48-industry classification. Robust standard errors are reported in parentheses. ∗ p<0.10, ∗∗ p<0.05, ∗∗∗ p<0.01. Continuous variables are winsorized at the 1% and 99% level to mitigate the impact of outliers.

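As a small illustration of the variable construction in these notes, the sketch below computes the first-day return and winsorizes it at the 1st and 99th percentiles; the data frame and column names are hypothetical.

```python
import pandas as pd

# Hypothetical IPO sample with first-day closing prices and offer prices.
df = pd.DataFrame({"close_day1": [12.0, 9.5, 30.0], "ipo_price": [10.0, 10.0, 10.0]})

# First-day return: (close - IPO price) / IPO price.
df["first_day_ret"] = (df["close_day1"] - df["ipo_price"]) / df["ipo_price"]

def winsorize(s: pd.Series, lower=0.01, upper=0.99) -> pd.Series:
    """Clip a series at its 1st and 99th percentiles to damp outliers."""
    lo, hi = s.quantile([lower, upper])
    return s.clip(lo, hi)

df["first_day_ret_w"] = winsorize(df["first_day_ret"])
print(df)
```
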
With the end goal of preventing churn, the model works by flagging customers with the highest risk of potential churn. For these flagged customers, the bank can deploy a representative to intervene and better understand their needs. However, resource limitations make it necessary to flag a relatively small number of customers with high confidence. The focus of this exploration was to reduce false positives in the flagged customers to increase the efficiency of bank interventions. In terms of the precision-recall trade-off, our model should be tuned to provide the highest possible precision for low recall values. Despite this simplification to a classification problem, the primary business KPI is the amount of withdrawal money correctly captured by the model, as discussed more in Sec. 3.4.

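A minimal sketch of this tuning logic, on synthetic data standing in for the (confidential) churn set: sweep the precision-recall curve and keep the lowest score threshold that still attains a target precision, so that only high-confidence customers are flagged. The 90% target and the model choice are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for the churn data set.
X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Lowest threshold that still achieves the target precision: this maximizes
# the number of flagged customers subject to the precision constraint.
precision, recall, thresholds = precision_recall_curve(y_te, scores)
target = 0.90
ok = precision[:-1] >= target   # precision has one more entry than thresholds
threshold = thresholds[ok].min() if ok.any() else 1.0
print(f"threshold={threshold:.3f}, flagged={np.mean(scores >= threshold):.1%}")
```
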
To gain further insight into this phenomenon, we start in Sec. 2 with the time series of RV from 1970 to 2021, including expanded views of the aforementioned periods of market upheavals. In Sec. 3 we give analytical expressions of the two distribution functions used to fit the entire RV distribution: modified Generalized Beta (mGB), which is discussed in great detail in a companion paper [liu2023rethinking], and Generalized Beta Prime (GB2), which is essentially a limiting case of mGB and is chosen because it has power-law tails. mGB is chosen because it exhibits a long stretch of power-law dependence before dropping off and terminating at a finite value of the variable, thus mimicking the nDK behavior of RV [liu2023rethinking]. Additionally, both mGB and GB2 emerge as steady-state distributions of a stochastic differential equation for stochastic volatility [liu2023rethinking]. In Sec. 4 we describe fits of RV with mGB and GB2 and give a detailed description of the tails, specifically in regards to possible DK/nDK. Towards this end we also use a linear fit (LF) of the tails. For all three fits, we provide confidence intervals [janczura2012black] and, more importantly, the results of a U-test [pisarenko2012robust], which evaluates a p-value for the null hypothesis that a data point comes from a fitting distribution [pisarenko2012robust]. Sec. 5 is a discussion of results obtained in Sec. 4.

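For reference, GB2 in a standard parameterization has the density below (the mGB modification is developed in the companion paper and not reproduced here); its right tail decays as a power law, f(x) ~ x^{-aq-1}, which is what makes it suitable for examining DK/nDK tail behavior.

```latex
% GB2 density, standard parameterization: shape a > 0, scale b > 0,
% tail exponents p, q > 0, and B(p, q) the Beta function.
f_{\mathrm{GB2}}(x) \;=\;
\frac{a\,(x/b)^{\,a p - 1}}{b\,B(p,q)\,\bigl(1 + (x/b)^{a}\bigr)^{\,p+q}},
\qquad x > 0 .
```
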
As anticipated, we observe bid shading in the first price auction, as well as within the lower to middle quantiles in the case of the soft-floor. The second price auction also displays bid shading at lower quantiles. We hypothesize that this deviation from theoretical prediction is due to a lack of learning amongst low-valuation types: players with low valuation win rarely, so the feedback they receive is coarse on most periods and hence insufficient to converge to bidding one’s value. [Footnote 13: We observe the same pattern of deviations from truthful bidding for low types in the second-price auction studied in §4.1.] Figure 8 analyzes the high-traffic query. We increased the length of the simulation to T = 800,000 periods. As observed, even with a large number of bidders, inferred values converge in a few iterations. [Footnote 14: We used a realistic pricing function, but for confidentiality reasons we are unable to provide details.]

In this model, individuals have to make decisions sequentially, without knowing their position in the sequence (position uncertainty), but are aware of the decisions of some of their predecessors by observing a sample of past play. In the presence of position certainty, those placed in the early positions of the sequence would want to contribute, in an effort to induce some of the other group members to co-operate (Rapoport and Erev, 1994), while late players would want to free-ride on the contributions of the early players. Nevertheless, if the agents are unaware of their position in the sequence, they would condition their choice on the average payoff from all potential positions, and they would be inclined to contribute so as to induce the potential successor to do so as well. G&M show that full contribution can occur in equilibrium, where given appropriate values of the parameters of the game (i.e. return from contributions), it is predicted that there exists an equilibrium where all agents contribute.

The main intuition behind the model is that an agent who observes a sample of past decisions of immediate predecessors, without defection, would decide to contribute, hoping to influence all her successors to do so. If, instead, she decides to defect, then all the successors are expected to defect as well. As there is position uncertainty, the agent deals with a trade-off between inducing contributions from the remaining players in the sequence and the cost of contributing. In their main theoretical result, G&M show that incentives off the equilibrium path largely depend on the sample size that the agents observe. When an agent observes a sample of more than one previous action that contains defection, there is no way this agent can prevent further defection by choosing to contribute. On the other hand, when the sample size is equal to one (only the decision of the previous player is observed), the agent can induce further contributions from the remaining agents in the sequence by deciding to contribute. The model predicts that when the sample size is equal to one there is a mixed strategy equilibrium that can lead to full contribution. Finally, when the agents are aware of their position in the sequence, the model predicts that full contribution will unravel, as late agents in the sequence will have an incentive to free-ride.

We conducted an extensive simulation to gauge the effectiveness of the multi-SSQW framework with daily return distributions of various stocks. Our results indicate that this approach successfully leverages the advantages of quantum computation within the financial arena.

A daily return distribution offers a statistical portrayal of a financial asset’s daily returns, such as shares or commodities. It is the day-to-day value change in percentage terms for the asset. This distribution illustrates the frequency of different return values, enabling investors to evaluate the associated risks and potential returns of an investment. Typically presented as a histogram, the x-axis represents the return percentage, while the y-axis indicates probability frequency. To create a daily return distribution, we initially collect real stock data from Yahoo Finance every day over a specified duration (2022-01 to 2022-03). We then categorize these daily returns into 16 distinct groups based on their respective percentages. These data are then transformed into a frequency distribution bar chart to provide a more visual and intuitive understanding of the returns distribution. Afterward, we employ the multi-SSQW approach to simulate these outcomes, yielding a more realistic probability distribution of the market. Our study involves simulating the daily return distribution over a quarter for various stocks or indices. Initially, we performed simulations 100 times, experimenting with num (the number of walkers) within a range from 1 to 10 and step = 1. This allows us to analyze the error statistically, observe the convergence behavior, and document the computation time for each scenario. Subsequently, we introduce random parameters to execute optimization simulations. These simulations enable us to generate the resultant data and the status of error convergence, thereby validating the efficacy and reliability of our approach. This structured and systematic procedure helps to provide a comprehensive understanding of the daily return distribution for different financial assets.

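A minimal sketch of this data-preparation step, assuming the `yfinance` package as the Yahoo Finance interface (the ticker is illustrative): download one quarter of daily closes, convert to percentage returns, and bin them into 16 groups whose normalized counts form the target distribution for the multi-SSQW optimizer.

```python
import numpy as np
import yfinance as yf  # assumed Yahoo Finance client; any OHLC source works

# One quarter of daily closes for an illustrative ticker.
close = yf.download("AAPL", start="2022-01-01", end="2022-03-31")["Close"].squeeze()
returns = close.pct_change().dropna().to_numpy() * 100.0  # daily returns in %

# Bin into 16 groups; the normalized counts are the target distribution.
counts, edges = np.histogram(returns, bins=16)
target = counts / counts.sum()
print(np.round(target, 3))
```
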
For us, the biggest surprise was how many of our investors delegated—roughly half of them. The delegation frequencies are higher than those found elsewhere in the literature, e.g., 35% in Apesteguia et al. (2020) and 17-38% in Holzmeister et al. (2022). We speculate that, in addition to other differences in the design and subject pool, our delegation frequencies arise in part from a fortuitous number of experts to choose from. In contrast to our five experts, in Apesteguia et al. (2020) subjects could choose among 80 leaders, and in Holzmeister et al. (2022) there was no choice.

Our (numerical/theoretical) argument in this and the next sections uses ergodic theory. Here, we give a quick review of ergodic theory. If the reader is familiar with ergodic theory, skip Subsection 5.1. Our basic references for ergodic theory are classical [Collet and Eckmann, 1980], [Day, 1998], and [W. de Melo, 1993]. Note that our strategy (philosophy) in this and the next sections stems from [Lyubich, 2012] and [Shen and van Strien, 2014] (these are quite readable expository articles on recent developments of unimodal dynamics). We stress that a deep result by Avila et al. (Proposition 6.3) theoretically supports our argument.

As mentioned above, the WETH-USDC pool is a mature and highly liquid pool. One would expect it to perform much more efficiently than younger, mostly speculative pools with high volatility. To this end, we choose the WETH-PEPE pool, which is a highly active pool with 1/100 the liquidity of USDC (median of 0.24M vs 22.5M dollars) and about 10 times the volatility of USDC. Table 3 presents the regression results.

In general, the direction of the effects of the different factors on slippage is similar to what we see in the WETH-USDC pool, with the notable exception that the effect of gas price on adversarial slippage is positive and significant and the effect of slippage tolerance is significant and economically meaningful. The positive and significant coefficient on gas price for adversarial slippage is likely due to the fact that an increase in the cost of a transaction makes some adversarial strategies unprofitable. If, on average, the availability of profitable MEV opportunities does not change during periods of high gas prices, then we should expect to see less adversarial activity during times when gas price is high. Since, unlike the WETH-USDC pool, network congestion is likely not closely associated with profitable MEV opportunities in the PEPE pool, we see that adversarial slippage for PEPE is positively correlated with gas price. As for slippage tolerance, the 25th percentile and median slippage tolerance values for the PEPE pool are 100 and 300, respectively. This suggests that users (or the Uniswap Labs interface) are actively choosing risk tolerance levels with the expectation that slippage would be quite high, likely due to the high price volatility.

Having outlined the landscape of VASPs in Austria, we are now interested in understanding how they differ from traditional financial intermediaries.

Figure 4 stylizes the traditional financial intermediaries on the right and the VASPs on the left. In the middle, rectangles represent the primary economic services, and links indicate what services each intermediary category offers. The comparison shows that an analogy with traditional intermediaries exists for three out of the four groups described in Figure 3. More specifically, VASPs in group 1 operate similarly to money exchanges. Indeed, the only service they offer is to buy and sell virtual assets for customers. VASPs in group 2 provide investment services to their users, akin to funds. Third, groups 3 (and 5) include VASPs allowing users to trade, keep their funds in custody, and thus act as brokers, connecting buyers and sellers to facilitate a transaction. The last group, which provides payment services, can be compared to payment processor systems.

Figure 4: Comparison of traditional financial intermediaries with VASPs. Circles on the left represent VASPs, divided into groups as described in Figure 3, while on the right are traditional financial intermediaries. Links point to the financial functions offered by each financial intermediary. VASPs are most similar to money exchanges, brokers, and funds, rather than banks. The colors in the circles highlight what traditional intermediary each group is most similar to.

However, only a few papers analyse the challenge of estimating the parameters of such dynamics. Indeed, in order to estimate the parameters of both the continuous and the spiked noise, we need first to be able to distinguish them. There exist different methods of jump filtering. A first intuitive one is to set a threshold, for example 3 standard deviations, such that data points within this threshold are considered to belong to the continuous part while the data points above correspond to the jumps. Cartea and Figueroa [26] and Pawłowski and Nowak [49] use an iterative method of filtering based on such a threshold. However, the choice of the threshold seems arbitrary, and it relies on a standard deviation that itself combines continuous and spiked noises. Figure 4 shows the residuals of Regression (7) filtered through the above method. The residuals categorized as continuous fit the normal quantiles well; however, the jumps are pretty sparse and, hence, difficult to fit. We explored this method, but we could hardly estimate the jump parameters convincingly and found the filtering criteria rather arbitrary.

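A minimal sketch of an iterative threshold filter in the spirit of Cartea and Figueroa [26], run on synthetic residuals: flag points beyond k standard deviations, re-estimate the standard deviation on the remaining "continuous" points, and repeat until the flagged set stabilizes. As noted above, the choice k = 3 is arbitrary, which is precisely the drawback discussed.

```python
import numpy as np

def iterative_jump_filter(residuals, k=3.0, max_iter=20):
    """Iteratively flag residuals beyond k standard deviations as jumps,
    recomputing the standard deviation on the non-jump points each pass."""
    x = np.asarray(residuals, dtype=float)
    is_jump = np.zeros(len(x), dtype=bool)
    for _ in range(max_iter):
        sigma = x[~is_jump].std()
        new = np.abs(x) > k * sigma
        if (new == is_jump).all():   # flagged set stabilized
            break
        is_jump = new
    return is_jump

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 500)
r[rng.choice(500, 5, replace=False)] += 8.0   # plant a few spikes
print(iterative_jump_filter(r).sum(), "points flagged as jumps")
```
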
We find that trading volume cannot be reconciled with the reported changes in open interest for the majority of these exchanges. It is unclear whether this is due to delayed or unreported trading volume or due to incorrectly reported open interest. In our view, the most likely scenario is that both are true, perhaps, however, not to the same degree on every exchange. Although we could not perfectly reconcile these quantities for any of the exchanges in question, we find that there are discernible differences in behavior across these exchanges. The discrepancies on ByBit and OKX are so frequent and large in magnitude that these two exchanges merit a category of their own. On these exchanges we could not reconcile trading volume with reported open interest in any time period, with the implied trading volume being in the range of hundreds of billions over and above the reported trading volume, assuming the open interest is the quantity that is correct. If in fact, however, the trading volume is the more accurately reported quantity, this would imply that the open interest on these exchanges is almost completely fabricated. This could perhaps be explained by certain incentive structures baked into the scenario: leading market participants to believe that informed investors are taking large positions in these markets (as implied by the large change in open interest) could—depending on the participants’ prior positioning—lead to panic or fear of missing out on potential profits, thereby increasing trading volume, and profit for the exchange. Given that volatility and trading volumes in Bitcoin and other cryptocurrencies have been trending lower in 2023, we believe that the latter is a more plausible explanation. Figures 1 and 2 also seem to point in that direction.

Binance, Deribit and BitMEX form, conceptually, another cluster of exchanges. Although we could not reconcile the changes in open interest with trading volume, the frequency and magnitude of the discrepancies is such that it leaves room for some relatively more benign explanation (see Section 5). The last group of exchanges is formed by Kraken and HTX, who have the lowest number of discrepancies. For these exchanges we could reconcile changes in open interest with trading volume on almost all sub-periods (see Tables 4 and 5).

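The reconciliation logic rests on a simple accounting identity: open interest can change by at most the volume traded over the same interval, since every opened or closed contract contributes to volume. A toy sketch of the per-period check (with made-up numbers) reads:

```python
import pandas as pd

# Hypothetical per-period feed of reported trading volume and open interest
# for one instrument, both in contracts (or USD).
df = pd.DataFrame({
    "volume":        [100.0, 40.0, 10.0],
    "open_interest": [500.0, 560.0, 545.0],
})

# A change in open interest of |dOI| requires at least |dOI| of traded
# volume in the same period, so volume < |dOI| flags an inconsistency.
d_oi = df["open_interest"].diff()
df["implied_min_volume"] = d_oi.abs()
df["inconsistent"] = df["volume"] < df["implied_min_volume"]
print(df)
```
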
The paper is organized as follows. The variant of the Yard-Sale Model on which the present paper focuses is motivated and defined in Section 2. The Gini coefficient is briefly reviewed in Section 3, with particular focus on its invariance under a normalization of the equations of motion. In Section 4 it is proven both that the Gini coefficient increases monotonically in time under the induced dynamics and that its rate of increase may be bounded. This result is then re-stated for a more general class of evolutionary models. The evolutionary, integro-differential PDE are numerically solved to demonstrate the bound holding in experiment. Plots and descriptions of the numerical method are included. The asymptotics of the modified system when a redistributive tax is incorporated are derived in Section 5 and shown to match the classical Yard-Sale Model with taxation.

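For concreteness, the Gini coefficient reviewed in Section 3 can be computed from a wealth sample via the standard sorted-rank identity; a minimal sketch follows (the exponential sample is only a test case, whose Gini coefficient is 0.5).

```python
import numpy as np

def gini(wealth) -> float:
    """Gini coefficient via the sorted-rank identity:
    G = 2 * sum_i(i * w_(i)) / (n * sum_i w_(i)) - (n + 1) / n,
    with w_(1) <= ... <= w_(n) the sorted wealths."""
    w = np.sort(np.asarray(wealth, dtype=float))
    n = len(w)
    index = np.arange(1, n + 1)
    return 2.0 * np.sum(index * w) / (n * np.sum(w)) - (n + 1.0) / n

rng = np.random.default_rng(0)
print(gini(rng.exponential(1.0, 10_000)))  # ~0.5 for exponential wealth
```
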
A common approach to mitigate the curse of dimensionality is the regression-based Monte Carlo method, which involves simulating numerous paths and then estimating the continuation value through cross-sectional regression to obtain optimal stopping rules. [1] first used spline regression to estimate the continuation value of an option. Inspired by his work, [2] and [3] further developed this idea by employing least-squares regression. Presently, the Least Squares Method (LSM) proposed by Longstaff and Schwartz has become one of the most successful methods for pricing American options and is widely used in the industry. In recent years, machine learning methods have been considered as potential alternative approaches for estimating the continuation value. Examples include kernel ridge regression [4, 5], support vector regression [6], neural networks [7, 8], regression trees [9], and Gaussian process regression [10, 11, 12]. In what follows, we refer to algorithms that share the same framework as LSM but may utilize different regression methods as Longstaff-Schwartz algorithms. Besides estimating the continuation value, machine learning has also been employed to directly estimate the optimal stopping time [13] and to solve high-dimensional free boundary PDEs for pricing American options [14].

In this work, we will apply a deep learning approach based on Gaussian process regression (GPR) to the high-dimensional American option pricing problem. GPR is a non-parametric Bayesian machine learning method that provides a flexible solution to regression problems. Previous studies have applied GPR to directly learn the derivatives pricing function [15] and subsequently compute the Greeks analytically [16, 17]. This paper focuses on the adoption of GPR to estimate the continuation value of American options. [10] initially integrated GPR with the regression-based Monte Carlo methods, testing its efficacy on Bermudan options across up to five dimensions. [11] further explored the performance of GPR in high-dimensional scenarios through numerous numerical experiments. They also introduced a modified method, the GPR Monte Carlo Control Variate method, which employs the European option price as the control variate. Their method adopts GPR and a one-step Monte Carlo simulation at each time step to estimate the continuation value for a predetermined set of stock prices. In contrast, our study applies a Gaussian-based method within the Longstaff-Schwartz framework, requiring only a global set of paths and potentially reducing simulation costs. Nonetheless, direct integration of GPR with the Longstaff-Schwartz algorithm presents several challenges. First, GPR’s computational cost is substantial when dealing with large training sets, which are generally necessary to achieve a reliable approximation of the continuation value in high dimensional cases. Second, GPR may struggle to accurately estimate the continuation value in high-dimensional scenarios, and we will present a numerical experiment to illustrate this phenomenon in Section 5.

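To illustrate the integration and its cost problem, here is a minimal sketch of a Longstaff-Schwartz iteration in which the cross-sectional regression is replaced by scikit-learn's GaussianProcessRegressor; this is a generic illustration, not the authors' algorithm. Because exact GPR scales cubically in the number of training points, the sketch fits on a small subsample of in-the-money paths, which is exactly the compromise the first challenge above alludes to. All parameters are illustrative (a one-dimensional Bermudan put).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def bermudan_put_gpr(s0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=10, n_paths=20_000, n_train=400, seed=0):
    """Longstaff-Schwartz with GPR in place of least-squares regression.
    GPR is fit on a small subsample of in-the-money paths, since its
    O(n^3) cost forbids using all paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    paths = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - paths[:, -1], 0.0)        # payoff at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                     # discount one step back
        s = paths[:, t]
        itm = np.where(K - s > 0)[0]                # regress on ITM paths only
        if len(itm) == 0:
            continue
        train = rng.choice(itm, size=min(n_train, len(itm)), replace=False)
        gpr = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
        gpr.fit(s[train, None], cash[train])
        continuation = gpr.predict(s[itm, None])
        exercise = (K - s[itm]) > continuation      # exercise where payoff wins
        cash[itm[exercise]] = K - s[itm[exercise]]
    return np.exp(-r * dt) * cash.mean()

print(bermudan_put_gpr())
```
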
The study of elicitability is a fast growing field in statistics and at its core are scoring functions that incentivise truthful predictions and allow for forecast comparison, model comparison (backtesting), and model calibration [17, 12]. In sensitivity analysis, scoring functions are utilised for defining sensitivity measures which quantify the sensitivity of an elicitable risk measure to perturbations in the model’s input factors [13]. The most well-known family of scoring functions are the Bregman divergences that elicit the mean, where a functional is called elicitable if it is a minimiser of an expected score, see Definition 2.2. Other elicitable functionals are quantiles, expectiles, and shortfall risk measures; tools used in risk management. Scoring functions are by nature asymmetric, making them ideal candidates for asymmetric cost functions in the Monge-Kantorovich OT problem. <|MaskedSetence|> As a Bregman divergence elicits the mean and gives raise to a BW divergence, our new MK divergences can be seen as generalisations of BW divergences, and thus the Wasserstein distance. In addition to scoring functions that elicits the mean, we study scoring functions that elicit the quantile, the expectile, and law-invariant convex risk measures. Interestingly, we find that most of the introduced MK divergences are attained by the comonotonic coupling. Furthermore, as an elicitable functional possesses infinitely many scoring functions, and thus gives raise to infinitely many MK divergences, the comonotonic optimal coupling is typically simultaneously optimal. <|MaskedSetence|> Furthermore, we prove that MK divergences induces by any law-invariant elicitable coherent risk measure are attained by the comonotonic coupling. Finally, we provide two applications to robust stochastic optimisation. First, we derive sharp bounds on distortion risk measures when admissible distributions belong to a BW-ball around a reference distribution, thus significantly generalising recent results of [3], who solve this problem for the special case of a Wasserstein ball. Second, we find the cheapest payoff (reflecting terminal wealth) under the constraint that its distribution lies within a BW-ball around a benchmark distribution.
This paper is organised as follows. <|MaskedSetence|> Section 3 is devoted to MK divergences induced by elicitable risk functionals such as the quantile, expectile, and shortfall risk measure. We find that, for distributions on the real line, the majority of the new MK divergences are attained by the comonotonic coupling. Applications of the new divergences to risk measure bounds, significantly generalising recent results of [3], and to portfolio management are provided in Section 4.
. | **A**: Using the celebrated Osband’s principle in statistics, we propose ways to create novel MK divergences that are attained by the anti- or comonotonic coupling.
**B**: Indeed, we propose novel asymmetric Monge-Kantorovich (MK) divergences where the OT cost functions are statistical scoring functions.
**C**: Section 2 introduces the MK divergences after reviewing the statistical concepts of elicitability and scoring functions, and the relevant topics in OT.
| BAC | BAC | BCA | BAC | Selection 1 |
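Two standard facts behind the construction in the row above can be stated compactly (the notation here is a sketch of mine, not lifted from the paper): a Bregman score elicits the mean, and reusing it as a Monge-Kantorovich transport cost yields an asymmetric divergence. Taking φ(x) = x² gives S_φ(z, x) = (x − z)², so the construction recovers the squared Wasserstein-2 distance on the real line.

```latex
% Bregman score of a strictly convex, differentiable \phi, and elicitability
% of the mean as the minimiser of the expected score:
\[
  S_\phi(z, x) = \phi(x) - \phi(z) - \phi'(z)\,(x - z),
  \qquad
  \mathbb{E}[X] = \operatorname*{arg\,min}_{z}\, \mathbb{E}\!\left[S_\phi(z, X)\right].
\]
% The scoring function reused as an asymmetric optimal-transport cost over
% the couplings \Pi(\mu, \nu) of two marginals:
\[
  D_{S_\phi}(\mu, \nu)
  = \inf_{\pi \in \Pi(\mu, \nu)} \int S_\phi(z, x)\, \pi(\mathrm{d}z, \mathrm{d}x).
\]
```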
Another research direction with fruitful outcomes is the time-inconsistent control problem, where the Bellman optimality principle does not hold.
There are many important problems in mathematical finance and economics that incur time-inconsistency, for example, the mean-variance selection problem and the investment-consumption problem with non-exponential discounting. The main approaches to handling time-inconsistency are to search for, instead of optimal strategies, time-consistent equilibrium strategies within a game-theoretic framework. Ekeland and Lazrak [14] and Ekeland and Pirvu [15] introduce the precise definition of the equilibrium strategy in a continuous-time setting for the first time. Björk et al. [5] derive an extended HJB equation to determine the equilibrium strategy in a Markovian setting. Yong [30] introduces the so-called equilibrium HJB equation to construct the equilibrium strategy in a multi-person differential game framework with a hierarchical structure. The solution concepts considered in [5, 30] are closed-loop equilibrium strategies, and the methods to handle time-inconsistency are extensions of the classical dynamic programming approaches. <|MaskedSetence|> <|MaskedSetence|> The open-loop equilibrium control is characterized by a flow of FBSDEs, which is deduced by a duality method in the spirit of Peng's stochastic maximum principle. Some recent studies devoted to the open-loop equilibrium concept can be found in [2, 3, 29, 18]. Specifically, Alia et al. <|MaskedSetence|> | **A**: [3], closely related to our paper, study a time-inconsistent investment-consumption problem under a general discount function, and obtain an explicit representation of the equilibrium strategies for some special utility functions, which is different from most of the existing literature on the time-inconsistent investment-consumption problem, where the feedback equilibrium strategies are derived via several complicated nonlocal ODEs; see, e.g., [26, 6].
.
**B**: [20] introduce the concept of open-loop equilibrium control by using a spike variation formulation, which is different from the closed-loop equilibrium concepts.
**C**: In contrast to the aforementioned literature, Hu et al.
| CBA | CAB | CBA | CBA | Selection 3 |
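The open-loop equilibrium concept this row cites has a compact spike-variation statement. The following is one common formalisation, written in a reward-maximisation convention; sign and minimisation conventions differ across the cited papers, so treat it as a sketch rather than the definition used in any one of them.

```latex
% Perturb the candidate control \bar{u} by an arbitrary admissible action v
% on a vanishing interval [t, t+\varepsilon):
\[
  u^{t,\varepsilon,v}_s =
  \begin{cases}
    v_s, & s \in [t, t+\varepsilon), \\
    \bar{u}_s, & s \notin [t, t+\varepsilon),
  \end{cases}
\]
% \bar{u} is an open-loop equilibrium control if no such local deviation
% improves the objective J at first order:
\[
  \liminf_{\varepsilon \downarrow 0}
  \frac{J(t, \bar{X}_t; \bar{u}) - J(t, \bar{X}_t; u^{t,\varepsilon,v})}{\varepsilon}
  \;\ge\; 0
  \qquad \text{for every admissible } v.
\]
```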
With regard to Figure 5, we conclude that the non-atomic arbitrage volume on DEXes is immense and controlled by a few large entities. To gain a better understanding of the evolution over time, we take a more in-depth look at the share of non-atomic arbitrage volume controlled by the large searchers in Figure 6. <|MaskedSetence|> <|MaskedSetence|> Generally, beaversearcher1 accounts for at least 10% of the volume, except during the two time periods of very high cryptocurrency price volatility mentioned previously, i.e., mid-November 2022 and mid-March 2023. Potentially, the non-atomic arbitrage market becomes more competitive on days characterized by exceptional events in the blockchain ecosystem, and beaversearcher1 loses some market share as a result. The second biggest searcher, jumpsearcher, on the other hand, had been operating since the merge but stopped in late July 2023, while searcher1, the third biggest searcher, only started operations in mid-March 2023 and operated for the remainder of our data collection window. <|MaskedSetence|> Furthermore, there was a period of time between May and September 2023 when no searcher we could associate with rsyncbuilder was operating.
. | **A**: Interestingly, beaversearcher1, the biggest non-atomic arbitrage searcher, is the only major searcher operating throughout our entire data collection period.
**B**: Interestingly, the three searchers associated with rsyncbuilder all operate during non-overlapping time windows; it appears that one searcher is replacing the other – potentially reflecting upgrades to the searcher smart contract.
**C**: We start by noting that on nearly 75% of days, merely two searchers are responsible for at least 50% of the arbitrage volume – underscoring the high concentration in the non-atomic arbitrage market.
| CAB | CAB | CAB | CAB | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> At the core of MarketSenseAI, the LLM generates concise summaries from vast amounts of numerical and textual data, extracting crucial insights about a company’s developments and stock potential. Subsequently, it analyzes these summaries, considering the investment horizon, to make investment suggestions on specific stocks. This dual-process approach, harnessing the summarization and analytical power of AI, offers a sophisticated tool for investors navigating the complexities of the stock market.
Furthermore, MarketSenseAI's modular architecture allows for diverse applications in the financial domain. <|MaskedSetence|> It can facilitate the construction of AI-based portfolios using the generated signals and their explanations, offering a revolutionary approach to asset management. This framework can be tailored to make personalized investment decisions, taking into account user preferences on risk, investment horizon, and goals. This adaptability demonstrates the framework's potential in various areas of finance, extending far beyond stock selection.
. | **A**: In essence, this paper pioneers the integration of multi-source data analysis with the cognitive capabilities of LLMs to redefine stock selection and portfolio management.
**B**: Each component of this architecture provides specific insights, such as news, fundamentals, and macroeconomic summaries, that can be exploited separately.
**C**: The resultant service not only enhances the quality of stock recommendations but ensures they are backed by robust, explainable reasoning.
| ACB | ACB | BAC | ACB | Selection 4 |
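The summarise-then-decide pipeline described in this row can be sketched in a few lines. Everything below (the llm callable, the Inputs fields, and the prompt wording) is a hypothetical stand-in rather than the MarketSenseAI API.

```python
# Two-stage LLM pipeline: compress each data source into a brief, then
# reason over the briefs to produce a signal for a given horizon.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Inputs:
    news: str
    fundamentals: str
    macro: str

def summarize(llm: Callable[[str], str], label: str, raw: str) -> str:
    return llm(f"Summarize the following {label} for an equity analyst:\n{raw}")

def recommend(llm: Callable[[str], str], ticker: str, x: Inputs, horizon: str) -> str:
    briefs = [summarize(llm, k, v) for k, v in vars(x).items()]
    prompt = (
        f"Given these briefs on {ticker} and a {horizon} investment horizon, "
        "answer buy/hold/sell with a short rationale:\n" + "\n".join(briefs)
    )
    return llm(prompt)
```

Because each brief is produced independently, the components can also be consumed separately, which is the modularity point candidate B makes.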
In Section 3, the local volatility model which circumvents potential issues with the randomisation formulation is constructed. <|MaskedSetence|> After enhancing the underlying probabilistic framework to allow for the stochastic switching times, we follow the previously established procedures of Sections 2 and 3 in constructing composite processes, transforming these into local volatility models and obtaining their characteristic function. <|MaskedSetence|> <|MaskedSetence|> The results are summarised in Section 7 and additional proofs are given in the appendix.
. | **A**: Furthermore, we illustrate a financial application by solving the pricing problem of a European option with an underlying that is modelled using the proposed local volatility models.
**B**: Here, we distinguish between two types of stochastic switching, involving a fixed and a random number of switches between regimes.
In Section 5, we propose a Markov-modulated randomised framework in which the regime switches are driven by an underlying Markov chain and obtain the characteristic function of the underlying process.
Numerical results are showcased in Section 6, featuring trajectories of both the local volatility and stochastic switching models.
**C**: We obtain error bounds for its density approximation compared to the randomised model as well as its characteristic function.
The extension to regime switches at stochastic times is the subject of Section 4.
| CBA | CBA | CBA | BCA | Selection 2 |
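The roadmap above leans on characteristic functions because they plug directly into Fourier pricing. As a generic illustration (the standard Gil-Pelaez/Heston-type representation, not a formula taken from the paper), write φ_T(u) = E[e^{iu ln S_T}] under the pricing measure; a European call then follows by one-dimensional integration:

```latex
\[
  C_0 = S_0\,\Pi_1 - K e^{-rT}\,\Pi_2,
  \qquad
  \Pi_2 = \frac{1}{2} + \frac{1}{\pi} \int_0^\infty
  \mathrm{Re}\!\left[\frac{e^{-\mathrm{i}u \ln K}\,\varphi_T(u)}{\mathrm{i}u}\right] \mathrm{d}u,
\]
\[
  \Pi_1 = \frac{1}{2} + \frac{1}{\pi} \int_0^\infty
  \mathrm{Re}\!\left[\frac{e^{-\mathrm{i}u \ln K}\,\varphi_T(u - \mathrm{i})}{\mathrm{i}u\,\varphi_T(-\mathrm{i})}\right] \mathrm{d}u.
\]
```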
<|MaskedSetence|> This is one reason why efficiency gains are small to negligible. Investments do not meaningfully affect expert fraud, which may be partially driven by very high consumer participation rates. Throughout the experiment, 95%-98% of consumers enter the market, which speaks to the potential efficiency of uncertain credence goods markets under verifiability. Consumers react negatively towards undertreatment, but not overtreatment, and generally do not reward expert investments. <|MaskedSetence|> Instead, they avoid high prices with expert-incentives to under-treat, but approach lower prices with expert-incentives to over-treat. This strongly reduces investment incentives.
Our findings provide first evidence on the efficacy of algorithmic decision aids on credence goods markets with obfuscated, heterogeneous expert abilities. We hope they provide some useful orientation for future research. In particular, there is much to learn about the relationship between expert investments, technology, and consumer beliefs. <|MaskedSetence|> However, carefully isolating and quantifying the mechanisms behind (a) consumer reactions towards experts with decision aids, and (b) high-ability experts' shunning of decision aids, is crucial to better understand the transformative impact of technology on credence goods markets. This necessitates a better understanding of the beliefs and expectations of experts and consumers alike. One positive implication of our research is that high-ability experts may be disincentivized to offer additional tests with low marginal benefits. On the other hand, we find clear signs of under-investment, with no indication that consumers incentivize experts towards the optimal diagnostic path. | **A**: In contrast to the predictions of the standard model, consumers also do not appear to reward expert prices that signal honesty.
**B**: For example, in this paper, we did not collect data on consumer beliefs.
**C**:
Overall, experts in our experiment substantially under-invest into new diagnostic technologies.
| ACB | CAB | CAB | CAB | Selection 2 |