| shuffled_text (string, 279-2.02k chars) | A (string, 6 distinct values) | B (string, 6 distinct values) | C (string, 6 distinct values) | D (string, 6 distinct values) | label (string, 4 distinct values) |
| --- | --- | --- | --- | --- | --- |
**A**: (2021) constructed some general continuous-time equilibrium dynamic risk measures using an adapted solution to a backward stochastic Volterra integral equation. Chen et al. (2018) and Sun et al. (2018) extended convex risk measures to loss-based cases. More recent research on dynamic risk measures can be found in Chen et al. (2021), Chen and Feinstein (2022), Mastrogiacomo and Rosazza (2022), Yoshioka and Yoshioka (2024)**B**: (2006) considered dynamic coherent, convex monetary and monetary risk measures for discrete-time processes modelling the evolution of financial values. Acciaio et al. (2012) extended dynamic convex risk measures in Cheridito et al. to take the timing of cash flow into consideration. Sun and Hu (2018) introduced a new class of set-valued risk measures which satisfies cash sub-additivity and investigated dynamic set-valued cash sub-additive risk measures. Wang et al**C**: Over the past two decades, not only has the study of static risk measures flourished, but also dynamic theories of risk measurement have developed into a thriving and mathematically refined area of research. Dynamic risk measures represent a sophisticated and evolving field within risk management, extending the analysis beyond static frameworks to account for temporal changes in risk. Unlike traditional static risk measures that provide a snapshot assessment, dynamic risk measures recognize the fluid nature of financial markets and aim to capture how risk evolves over time. Introduced by Riedel (2004), dynamic coherent risk measures offer a framework that allows for a more nuanced understanding of risk dynamics. This advancement enables a comprehensive assessment of risk in the context of changing market conditions and evolving investment portfolios. Additionally, the introduction of dynamic convex risk measures by Detlefsen and Scandolo (2005) further enriched the field, providing insights into the time consistency properties of risk measures over different time horizons.
Cheridito et al | BCA | BCA | CBA | BAC | Selection 3 |
**A**: We follow this convention to define the Asian delta value and the European delta value as
**B**: In this section, we present the short-maturity asymptotic for the sensitivity of the option with respect to the initial value $S_0$**C**: In many studies, this sensitivity is referred to as delta | BAC | BCA | CAB | CBA | Selection 3 |
**A**: 4 we tackle the motivating problem: moving from [17], we extend their results by providing, using the previously developed theory, an analysis of the time variability of sensitivities, as well as a quantification of the statistical significance and an analysis of its sparsity**B**: Sec. 5 concludes and devises additional research directions.
**C**: Finally, in Sec | ACB | BAC | ACB | BCA | Selection 4 |
**A**: For example, in bond markets, zero-coupon bonds are parametrized by their maturities $\theta$ which is a continuous parameter**B**:
Our setting is also different from “large financial” market models, where there is a continuum of securities**C**: However, only a finite number of bonds are traded at the same time, see [47], [58], [4]. | BCA | ABC | BAC | CBA | Selection 3 |
**A**: We close by commenting on certain aspects of our approach.
**B**: Our main theme has been how learning turns jointly on preferences and information when there are multiple states**C**: This paper has studied a general model of sequential social learning on observational networks | BAC | BAC | CBA | BCA | Selection 3 |
**A**:
Specifically, we study a model of scientific communication in which a benevolent social planner chooses norms with respect to MHT adjustments, taking into account the way this shapes researchers’ incentives. The model embeds two core ideas**B**: First, social welfare is (potentially) affected by the summary recommendations (in particular, hypothesis tests) contained in research studies, as well as by the production of new knowledge per se.111We describe the case where hypothesis rejections lead to changes in welfare relative to the status quo, but under a straightforward reinterpretation, the framework can also accommodate situations in which “precise null” results affect welfare. Second, while this makes the research a public good, the costs of producing it are borne privately by the researcher**C**: She decides whether or not to incur these costs and conduct a (pre-specified) experiment based at least in part on the private returns to doing so. The planner must, therefore, balance the goals of (i) motivating the production of research and (ii) limiting the possibility of harm due to mistaken conclusions. We represent these preferences with a utility function that includes both ambiguity-averse and expected-utility components (as, for example, in Gilboa and | BCA | ABC | BCA | CAB | Selection 2 |
**A**: (p. 276)
**B**: It also surpasses Anthony Wrigley’s estimate that matching the annual energy output of Britain’s coal industry circa 1815 would have required that the country magically receive 15,000,000 additional acres of forest. If we add cotton, sugar, and timber circa 1830, we have somewhere between 25,000,000 and 30,000,000 ghost acres, exceeding even the contribution of coal by a healthy margin**C**: …[R]aising enough sheep to replace the yarn made with Britain’s New World cotton imports by would have required staggering quantities of land: almost 9,000,000 acres in 1815, using ratios from model farms, and over 23,000,000 acres in 1830. This final figure surpasses Britain’s total crop and pasture land combined | BAC | BCA | CBA | CAB | Selection 3 |
**A**:
There has been prior work examining both the extension of public goods to static (exogenous) networks, and the provision of public goods on endogenous networks**B**: In particular, Bramoullé et al**C**: (2007) launched research into this environment, by showing that given a network shape, specialized Nash equilibria (in which a small subset of players provide effort, and those connected to them free-ride) tend to be not only stable but also the most efficient in terms of overall welfare. Further, they find that, in an environment with agent heterogeneity, these specialized equilibria are often unique. This is especially relevant since, in our endogenous network environment, an agent’s marginal cost of effort is dependent on their position in the network, and in particular how many others they choose to share with. | ACB | ABC | ACB | CAB | Selection 2 |
**A**:
In general, risk-averse MDPs seldom admit closed-form solutions, and we typically rely on numerical methods to approximate the optimal policy**B**: The numerical methods for solving risk-averse MDPs, particularly in a model-agnostic context, are currently under development**C**: We refer to, e.g., [59, 42, 45, 24, 25], and the references therein for recent progress. We also provide some discussion on the relevant methods at the end of Section 7.2 and Section 7.3. | CAB | ABC | BCA | BAC | Selection 2 |
**A**: While most inventory management models consider the case of backordering, especially in the practice of grocery retailing the assumption of lost sales is more realistic, leading to models that are in general more difficult to solve (see Bijvank and Vis, 2011 for a review on inventory models with lost sales)**B**: Under the assumptions of the newsvendor model, the cost-optimal inventory level can be derived by the ratio between costs for excess inventory and costs for excess demand. In their review, Qin et al. (2011) give suggestions for future research based on the newsvendor model, such as the integration of stochastic supply and demand in the same model as well as the introduction of (stochastic) lead times and multi-period models.**C**:
In case of stochastic customer demand, two situations can arise at the end of a demand period: (1) demand exceeds the inventory level leading to either lost sales or backordering of orders and (2) excess inventory | CAB | ACB | BCA | CBA | Selection 3 |
**A**:
External Debt (% GDP): Total external debt is debt owed to nonresidents repayable in currency, goods, or services**B**: Total external debt is the sum of public, publicly guaranteed, and private nonguaranteed long-term debt, use of IMF credit, and short-term debt**C**: Short-term debt includes all debt having an original maturity of one year or less and interest in arrears on long-term debt. Source: International Debt Statistics. | BAC | ABC | ACB | ACB | Selection 2 |
**A**: Then, we briefly introduce the signature and we show how it allows us to construct an RKHS that we can use to make the MMD a metric able to discriminate two probability measures defined on the bounded variation paths quotiented by some equivalence relation. Finally, the statistical test underlying the signature-based validation is introduced. In what follows, $\mathcal{X}$ is a metric space.
**B**: In this section, we start by introducing the Maximum Mean Distance (MMD), which allows one to measure how similar two probability measures are**C**: Secondly, Reproducing Kernel Hilbert Spaces (RKHS) are presented as they are key to obtain a simple formula for the MMD | CAB | ACB | BAC | ACB | Selection 1 |
**A**:
The anchoring effect refers to “a systematic influence of initially presented numerical values on subsequent judgments of uncertain quantities,” where the judgement is biased toward the anchor (Teovanović (2019))**B**: The anchoring effect has been replicated across a variety of contexts, as I discuss in Sect. 1.1, including with judgements involving money and anchors established by government policy**C**: Given the prevalence of the anchoring effect, one would expect to find an anchoring bias generated by one of the most controversial figures of the economy: the minimum wage. | CBA | ABC | ACB | ACB | Selection 2 |
**A**: Therefore, it is tough to reflect the complexity and diversity in real transactions simply by homogeneous graphs.
Meanwhile, heterogeneous graph [13] is a widely used technique to model complex interactions, which can preserve the semantic information of interactions to the greatest extent.**B**: However, real transactions on Ethereum generally involve different types of interactions between different types of accounts, which will be neglected during homogeneous modeling**C**: Existing Ponzi scheme detection methods based on graph analytics generally rely on homogeneous graph modeling [10, 11, 12] due to their simplicity | CAB | BCA | CAB | CBA | Selection 4 |
**A**: If the discovery stage is successful, the firm can start testing the drug candidate in humans through three stages of clinical trials**B**: In Phase II, firms test the drug’s efficacy on a larger sample of individuals with the targeted diseases. Finally, in Phase III, the firm conducts double-blinded tests to assess the drug’s effectiveness on a large sample of patients.
**C**: Phase I involves screening the drug for possible toxicity using a small sample of healthy subjects | BCA | BAC | ACB | ABC | Selection 3 |
**A**: $i$ votes at all $q'(i) > q(i)$)**B**: As we document in the Appendix,
individual behavior in the experiment is mostly monotonic**C**: There is | CBA | BAC | ACB | ABC | Selection 4 |
**A**: Recently, there have been new approaches to the quantum state preparation problem in the literature that are not related to the qGAN or VQE methods reviewed above. In [iaconis2023quantum], the authors considered the quantum state preparation problem for probability distributions with smooth differentiable density functions, such as the normal distribution, where they proposed an algorithm based on the matrix product state (MPS) approximation method, and provided an error analysis and numerical convergence for the single-variate normal distribution**B**:
For general probability distributions, loading of their discretized probability density functions (PDFs) remains one of the main problems in quantum computing. In the quantum computing literature, this step is also referred to as quantum state preparation, and it is an important initialization step for many quantum algorithms for pricing options**C**: In [pracht2023pricing], the author proposed a quantum binomial tree algorithm to approximate the option prices in a discrete time setting. We refer the reader to [chang2023novel] for a similar random walk based algorithm, and to [de2023quantum] for a hybrid classical quantum approach based on deconvolution methods for the quantum state preparation problem. However, to the best of our knowledge, there seems not to be any result in the literature that provides rigorous upper bounds on the quantum circuit complexities as well as convergence for general multi-variate distributions. | BAC | ABC | ABC | CBA | Selection 1 |
**A**: It is also verified therein that the expected total capital injection is bounded. Finally, the proof of an auxiliary lemma is reported in Appendix A.
**B**: The rest of the paper is organized as follows. In Section 2, we introduce the auxiliary state processes with reflections and derive the associated HJB equation with two Neumann boundary conditions for the auxiliary stochastic control problem**C**: In Section 3, we address the solvability of the dual PDE problem by verifying a separation form of the solution and the probabilistic representations, the homogenization of Neumann boundary conditions and the stochastic flow analysis. The verification theorem on the optimal feedback control is presented in Section 4 together with the technical proofs on the strength of stochastic flow analysis and estimations of the optimal control | ABC | BAC | ACB | CAB | Selection 4 |
**A**: The four different decarbonization strategies shown in Fig**B**: 3 display characteristic patterns for the saved CO2 emissions and the expected job and output loss curves.
For the ‘Remove largest emitters first’ strategy in Fig**C**: 3A firms are ordered according to their descending CO2 rank which is reflected in the cumulative emission savings curve (brown). | CAB | ABC | CBA | BAC | Selection 2 |
**A**: However, this would require formulating a set of separate subproblems for each of the GenCos at the lower level which would further increase the computational burden. An alternative avenue for future research could involve the integration of storage batteries into the grid. This would potentially alleviate the intermittency problem regarding the availability of variable renewable energy sources and hence provide new insights, thereby offering novel perspectives on the outcomes of numerical experiments**B**:
Regarding further research, one could pinpoint a few possible directions. The first one is related to considering an imperfectly competitive market defining a Cournot oligopoly (Ruffin, 1971) instead of perfect competition. The imperfectly competitive market structure may more closely represent the reality (Oikonomou et al., 2009) as the oligopolies have access to information helping them in the decision-making process**C**: However, one should bear in mind that this would also most probably further complicate the computational tractability of the problem. Another possible enhancement could stem from the development of an efficient solution method that allows one to consider a continuous range of investment decisions for the TSO at the upper level. Lastly, if the modelling simplifications caused by limitations of state-of-the-art solvers are overcome one could investigate how policy-related insights differ in case the model allows for a higher level of detail for energy system representation, investment projects, uncertainty formulation and the planning horizon.
**A**: to obtain $N$ by Inequality (3.11)**B**: CPU time is measured
in milliseconds**C**: The CPU time to estimate $\|f^{(k+1)}\|_{\infty}$ | BAC | BAC | ACB | ABC | Selection 4 |
**A**: In order to see that the optimal strategy is unique (up to sets of measure zero) note that optimal strategies have to satisfy the Bellman optimality principle (this has to be shown, but is standard, see [39], Thm**B**: 3.3.1). Since we have already computed the value function, this necessarily implies that the optimal strategy is given by extremal points in the HJB equation (up to sets of measure zero). Since these maximum points are unique, the statement follows.
∎**C**: A standard verification theorem (see for example [8], pp.280-282, [39], [21] for similar versions) concludes our proof | ACB | BCA | BAC | CBA | Selection 2 |
**A**: In an ideal setting we would try to calculate the exact P content of each trade relationship that each country has with each other country for each goods category. This, however, is not feasible, due to resource constraints and data availability. We can however obtain the approximate P content indirectly by estimating the optimal weighting scheme that results in the flow matrix that gives us the most likely actual P flows between countries.
**B**: In particular, we have to investigate if the weighting scheme in the calculation of the trade matrix (see eq**C**: 4) can be improved | CAB | BAC | BAC | CBA | Selection 1 |
**A**: We ponder over two situations during a period under study: a market crash generating substantial losses and a general market characterised by moderate gains**B**: Unlike Section 4 where all results are model-based, Section 5 is devoted to backtesting of real-world benefits for holders of EPSs using historical data for S&P 500 and S&P/ASX 200 indices and hence results and conclusions are based on model-free market data**C**: An analysis of the first situation shows that an EPS can provide highly effective protection against large losses, whereas the second situation illustrates the tradeoff between loss protection and profit sharing. The initial premia for EPS products used in our analysis can be obtained directly from the market data for European call and put options, as opposed to calculations based on a hypothetical model. The conclusion from
backtesting is that an EPS can serve as an efficient tool to mitigate financial risks associated with a superannuation account. | CAB | BAC | BCA | ABC | Selection 2 |
**A**: It would be interesting to run formal statistical tests to check whether the differences reported among the different groups in this Section are statistically significant or not. However, since we cannot implement an interacted model (that is simply not possible with the current estimator), we do not know the covariance between the estimates for the different groups**B**: One solution would be that of a bootstrap test, i.e. running each estimate, say, at least 100 times, each time on a different generated random sample, for each subgroup, in order to get the bootstrapped
distribution of the differences of each coefficients (or an estimate of the covariance)**C**: Since the estimation of a single event study with the Callaway Sant’Anna procedure is quite computationally intensive (i.e. about 23 hours on a standard server for each subgroup and each variable), we deem that running such a bootstrap test is unfortunately infeasible, as total running time would be about 300 days. | BCA | BAC | BAC | ABC | Selection 4 |
**A**: This Bayesian approach to calibration allows a joint estimation of latent factors, taking into account possible interdependencies and also avoids the need to make strong a priori assumptions such as setting thresholds for jump sizes (cf**B**: The two models are then calibrated to three different data sets, the 2018-21 data, the 2021-23 data and the whole 2018-23 data using Markov Chain Monte Carlo methods**C**: [15]). For each of these data sets, we provide model parameters along with simulations of the spot price and assessment of model adequacy through posterior predictive checking.
| CAB | CBA | BAC | CBA | Selection 3 |
**A**: For each stock at date t𝑡titalic_t, Alpha360 looks back 60 days to construct a 360-dimensional vector as the raw feature of this stock. We use the stock price trend defined in Definition 2 as the label for each stock.**B**:
We use the stock features of Alpha360 in the open-source quantitative investment platform Qlib (Yang et al., 2020)**C**: Alpha360 contains 6 indicators on each day, which are opening price, closing price, highest price, lowest price, volume weighted average price (VWAP) and trading volume | BAC | ACB | BAC | CAB | Selection 4 |
**A**: Subsequently, I assess the post-IPO sentiment trajectory at the firm’s level, illustrating that companies with initial high enthusiasm consistently garner more optimism. This trend stands out, especially when juxtaposed with the long-term under-performance of these entities. This amplified enthusiasm is mirrored in post-IPO conversations, highlighting that firms with strong pre-IPO buzz are more likely to be the topic of discussion.
**B**: The data indicates a consistent bullish sentiment among investors engaging with multiple IPOs. However, for users interacting with a broader spectrum of IPOs, there is a clear temporal change, hinting at a move towards caution, perhaps steered by the trends seen in IPO outcomes**C**: Refining my focus on the qualitative nuances of investor communication, I observe that messages rich in financial insights and those that reinforce existing information have a marked influence on stock returns. Further, I explore the continuity of pre-IPO enthusiasm at the individual investor level | CBA | ACB | CAB | BAC | Selection 1 |
**A**:
In the second use case, we aim to explore the performance of neural network models for credit risk assessment by incorporating ideas from quantum compound neural networks [17]**B**: These orthogonal layers, which can be trained efficiently on a classical computer, are the simplest case of what we call compound neural networks, which explore an exponential space in a structured way. For our use case, we design compound neural network architectures that are appropriate for financial data. We evaluate their performance on a real-world dataset and show that the quantum compound neural network models both have far fewer parameters and achieve better accuracy and generalization than classical fully-connected neural networks.**C**: We start by using quantum orthogonal neural networks [17], which add the property of orthogonality for the trained model weights to avoid redundancy in the learned features [18] | BAC | ABC | ACB | BAC | Selection 3 |
**A**: As explained in liu2023rethinking, mGB2 and GB2 are equivalent since $q$ and $p$ are independently defined at this level of the GB family of distributions and $q$ can be shifted by unity in the definition of mGB2/GB2**B**:
Below, we will use (7) to fit CCDF of distributions of RV**C**: Consequently, we choose a more familiar CCDF of GB2 | BCA | BAC | BCA | CAB | Selection 2 |
**A**: These are chosen to represent right-skewed, left-skewed, and centered but differently concentrated value distributions. We also re-ran all simulations with standard reserve prices instead of soft-floors, as in the preceding section.**B**:
The caveat is that the preceding results are based on a specific distribution of values for the regular bidders—a uniform distribution**C**: To explore the robustness of the conclusions, we re-run all simulations for a sample of discretized Beta distributions with different parameter values | CAB | CBA | BCA | BAC | Selection 1 |
**A**:
Directly linked to the last observation, Figuieres et al. (2012) highlight that many individuals condition their decisions on the observation of others’ decisions in finitely repeated simultaneous public goods games (see Fischbacher and Gächter 2010; Keser and van Winden 2000). Extensive experimental research in public goods games has shown that a large proportion of subjects’ behaviour deviates from the predictions of Nash equilibrium and they mostly behave in a conditionally co-operating manner**B**: The latter implies that their contribution is positively correlated to their beliefs about the contributions of the other members of the group. Fischbacher et al. (2001) was among the first studies to classify subjects to different types of decision makers, including the selfish type (free-rider), the reciprocator type, and the type applying a hump-shaped strategy (contributions that are first increased then decreased in others’ contributions)**C**: This methodology was later extended in Bardsley and Moffatt (2007), refined in Thöni and Volk (2018), and recently used in Katuščák and Miklánek (2023), while Préget et al. (2016) use this in a sequential leader-followers public goods game. In addition to testing the predictions of G&M, we use structural econometric modelling to test the presence of alternative behavioural types of agents in our data. In particular, using the Strategy Frequency Estimation Method (SFEM), an estimation procedure introduced in Dal Bó and Fréchette (2011) and Fudenberg et al. (2012), we explore the presence of free-riders, altruists and conditional co-operators in our data, on top of those behaving as the G&M model prescribes. | ABC | ACB | CAB | ACB | Selection 1 |
**A**: This perspective allows us to model potential financial outcomes within the quantum framework, thus providing a more nuanced understanding of market dynamics**B**: Equation 1**C**:
In leveraging the principles of quantum mechanics for financial market analysis, we adopt a novel approach by interpreting states of financial uncertainty through the lens of quantum states | ACB | BAC | BCA | CAB | Selection 3 |
**A**: (If anything, we’d expect risk to be at least as salient in Simple as in Complex, since the latter offers diversification opportunities.)
Hence Hypothesis 4a suggests a less dramatic contrast (with ambiguous sign) between Simple and Complex than between Trivial and the other two tasks.**B**: As the names suggest, the Complex task seems much more cognitively (and mechanically) demanding than the other two tasks, so parts (a) and (b) of the decision costs Hypothesis 3 suggest that the contrast between Complex and Simple could be more dramatic than the contrast between Simple and Trivial**C**: On the other hand, risk attitudes play a role only in the Complex and Simple tasks | CAB | ACB | BCA | BAC | Selection 1 |
**A**: By the Sharkovsky order in [Sharkovsky, 1964], we know that if the map $f$ has a cycle of period three, then it also has a cycle of any odd order**B**: Thus, it is natural to guess that if $\lambda$ is close to the lower bound for $\lambda$ in Theorem 1.2 (but still above the lower bound), the map $f$ has an odd period cycle but no period three cycle. We give one example where this is actually the case. Using our concrete characterisation of the existence of an odd period cycle, we obtain:**C**:
To end this section, we give an application of Theorem 1.2 | BAC | CBA | ABC | BCA | Selection 4 |
**A**:
In general, the direction of the effects of the different factors on slippage is similar to what we see in the WETH-USDC pool, with the notable exception that the effect of gas price on adversarial slippage is positive and significant and the effect of slippage tolerance is significant and economically meaningful. The positive and significant coefficient on gas price for adversarial slippage is likely due to the fact that an increase in the cost of a transaction makes some adversarial strategies unprofitable**B**: As for slippage tolerance, the 25th percentile and median slippage tolerance values for the PEPE pool are 100 and 300, respectively. This suggests that users (or the Uniswap Labs interface) are actively choosing risk tolerance levels with the expectation that slippage would be quite high, likely, due to the high price volatility.**C**: If, on average, the availability of profitable MEV opportunities does not change during high gas price, then we should expect to see less adversarial activity during times when gas price is high. Since, unlike the WETH-USDC pool, network congestion is likely not closely associated with profitable MEV opportunities in the PEPE pool, we see that adversarial slippage for PEPE is positively correlated with gas price | BAC | BCA | ABC | ACB | Selection 4 |
**A**: As for VASP-12, the Bitcoin addresses we gathered through manual transactions identify clusters whose transaction history only dates back to a few months (mid-2021)**B**: Similarly to VASP-9, we could not collect sufficient data to obtain comparable values to the figures reported in the balance sheets**C**: Again, this highlights that re-identification is less effective for Bitcoin than Ethereum addresses.
| ACB | CBA | ABC | BAC | Selection 4 |
**A**:
see, e.g., [14]**B**: This is not sufficient for our purposes as we need on one hand not weakly, but Pareto optimal points, and on the other hand, we need the set of all Pareto optimal points (or $\epsilon$-Pareto optimal points)**C**: However, we will see that such a polyhedral approximation (7) will be a first step to reach our goal. | BAC | BAC | ABC | BCA | Selection 3 |
**A**: This phenomenon has already been observed by several papers. Following this observation, some suggest to distinguish week and week-end effects [45] [46] while others show statistical significance of daily dummy integration [19]**B**: We also considered alternatives such as additional weekly seasonality terms. The form of $\mu(\cdot)$ minimising the AIC criteria turned out to be the one defined in Equation (1).**C**: Following minimisation of Equation (7), we check significance of the coefficients and residual plots. While all coefficients show 5% significance, residual plots are less satisfactory.
Indeed, as shown in Figure 2, residuals show important weekly dependencies | CAB | ACB | CBA | BCA | Selection 4 |
**A**: Arguably, hiding trading volume is not in the best interest of exchanges because they compete for liquidity, and trading volume is the single most important measure that market participants consider when gauging the liquidity of an exchange. Although this argument applies to regular trading volume, the same cannot necessarily be claimed for trading volume that has an associated conflict of interest, e.g., block trades by big clients or liquidations that could be seen, if frequent, as embarrassing to the exchange.**B**:
This is to say that the observed trading volume for a time period $P$ must be greater than or equal to the minimal trading volume required to produce the observed open interest total variation in the same period**C**: If Eqn. (5) does not hold at a given time period $P$, this can be due to: i) not all trading volume being reported, ii) the open interest being incorrect, or iii) some combination of i) and ii) | CBA | BCA | ACB | CAB | Selection 4 |
**A**: Whereas here the Gini – Gini production bound gives a maximum rate of increase for the measure of economic inequality.**B**:
That is to say, the Gini – Gini production relation derived below is similar to entropy – entropy production inequalities for diffusive PDE**C**: For the systems examined in [24], the entropy production bound gives a minimum rate of decrease for the relative entropy | CBA | BAC | BAC | CAB | Selection 4 |
**A**: The numerical experiments show that the proposed approach outperforms the least-squares method in high-dimensional settings, and doesn't require specific selection of hyper-parameters in different scenarios. Additionally, it maintains a stable computational cost despite increasing dimensions. Therefore, this method holds promise as an effective solution for mitigating the curse of dimensionality.**B**:
Valuing an American option involves an optimal stopping problem, typically addressed through backward dynamic programming. A key idea is the estimation of the continuation value of the option at each step**C**: While least-squares regression is commonly employed for this purpose, it encounters challenges in high-dimensions, including a lack of an objective way to choose basis functions and high computational and storage costs due to the necessity of calculating the inverses of large matrices. These issues have prompted us to replace it with a deep kernel learning model | CAB | ABC | BCA | CBA | Selection 1 |
**A**:
Osband’s principle can be used to create new MK divergences**B**: Here, we give an example of the reciprocal of risk functionals.**C**: Indeed, any strictly monotonic transformation of an elicitable risk functional leads to a new MK divergence, where the optimal coupling follows from Proposition 3.17 | BCA | ABC | CBA | ACB | Selection 4 |
**A**: In the special case of a single stock, the DF equilibrium strategy takes the same form as the open-loop equilibrium in [3], but with a modified risk tolerance. This effective risk tolerance parameter has already appeared in some works on a similar topic but with a time-consistent model; see, e.g., [23, 19]. Moreover, in the case with constant discount rate, the equilibrium reduces to the solution of the classical Merton problem, which indicates that the equilibrium concept in our paper is the natural extension of equilibrium in the classical time-consistent setting to the time-inconsistent setting. Third, our work also provides a new explicitly solvable mean field game model**B**: The contributions of our paper are as follows: first, as far as we know, our work is the first paper to incorporate consumption into the CARA portfolio game with relative performance concerns. Portfolio games under relative consumption are generally underexplored, with the exception of [22], which focuses solely on CRRA utilities under zero discount rate. Our paper fills this gap in the literature. Second, our work can be viewed as a game-theoretic extension of the exponential utility case presented in [3]**C**: Since the pioneering works by [24, 21], MFGs have been actively studied and widely applied in economics, finance and engineering. To name a few recent developments in theories and application, we refer to [12, 8, 13, 9, 10] among others. However, few studies combine MFGs with the time-inconsistency problem, except some linear quadratic examples; see, e.g., [28, 4]. Our result adds a new explicitly solvable non-LQ example to the intersection of these two fields.
| ABC | BAC | BCA | ACB | Selection 2 |
**A**: Figure 13(b)). Finally, we find a fourth, previously not identified, builder that is likely running an integrated non-atomic arbitrage builder: builder1. To be precise, 83% of the non-atomic arbitrage volume by builder1seacher is located in blocks built by builder1seacher.
**B**: However, we are able to establish a link between rsyncsearcher1 and rsyncsearcher2, as the bytecode of their contracts is identical and they are thus likely to be the same entity as established in previous work [20]. We further comment that rsynbuilder was largely not operating at the same time as rsyncsearcher1 (cf**C**: Similarly, 83% of the non-atomic arbitrage volume of mantasearcher is found in blocks built by the mantabuilder, while 88% of volume from the rsyncsearcher2 and an astonishing 98% of volume from the rsyncsearcher3 is included in blocks by the rsyncbuilder. Note that there is no link between rsyncsearcher1 and rsyncbuilder in Figure 10 even though our naming suggests there to be a link | CBA | BAC | ABC | ABC | Selection 1 |
**A**: (2023) in several tasks including named entity recognition and news classification. Notably, while FinBERT exhibits superiority in financial sentiment analysis over ChatGPT, the research lacks prompt engineering and utilizes a dataset that inherently favors FinBERT**B**:
In comparative evaluations, Li et al. (2023) posits that ChatGPT and GPT-4 surpass domain-specific models like FinBERT Araci (2019) and BloombergGPT Wu et al**C**: In contrast, Fatouros et al. (2023b) presents evidence that ChatGPT outperforms FinBERT in financial sentiment analysis both in terms of classification performance and correlation with actual returns, even when applied with zero-shot prompting. Zero-shot prompting enables ChatGPT to perform tasks without prior specific training which indicates that it can be effective in sentiment analysis based on its comprehensive training, despite no explicit financial data training. | ABC | CAB | BAC | BCA | Selection 3 |
**A**: Departing from deterministic switching times, we advance to more sophisticated models based on stochastic switching times and a Markov-modulated model**B**: In all cases, characteristic functions of the processes are obtained.
**C**: Multiple models are proposed in this article which vary in how the composite process is obtained from the component processes | ACB | ABC | BAC | BCA | Selection 4 |
**A**: Note the absence of the discount term in the PDE Eq**B**: (2.7) as a consequence of the absence of discounting in Eq**C**: (2.6).
Depending on whether the futures references forward or backward rates, the payoff may depend on state variables at $T$ only, or also at some $t \leq T$.
**A**: The dependent variable is an ordinal variable that captures the number of consumers (0 - 3) who approached the expert in the current round**B**:
This table reports results of a panel ordered logistic regression using subject-level random effects and a cluster–robust VCE estimator at the matched group level (standard errors in parentheses)**C**: Undertreated and Overtreated are lagged variables (one round). | BCA | ABC | BAC | BCA | Selection 3 |
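
The snippet below is a minimal sketch of how one might load and decode a row of this dataset with the Hugging Face `datasets` library. It is not an official loader: the repository id `user/shuffled-text-ordering` is a placeholder, the helper functions are illustrative, and the decoding conventions are assumptions inferred from the preview rows above, namely that `Selection N` points to the N-th option column (A-D) and that an option string such as `BCA` gives, for segments A, B and C in turn, the position each occupies in the restored passage.

```python
# Hedged sketch: placeholder repo id and assumed label/permutation conventions.
import re
from datasets import load_dataset

ds = load_dataset("user/shuffled-text-ordering", split="train")  # placeholder repo id

def split_segments(shuffled_text: str) -> dict:
    """Split the shuffled_text field into its **A**, **B**, **C** segments."""
    parts = re.split(r"\*\*([ABC])\*\*:\s*", shuffled_text)
    # re.split with a capture group yields [prefix, 'A', text_A, 'B', text_B, 'C', text_C]
    return {parts[i]: parts[i + 1].strip() for i in range(1, len(parts) - 1, 2)}

def restore_order(row: dict) -> str:
    """Rebuild the passage using the option column named by the label (assumed mapping)."""
    segments = split_segments(row["shuffled_text"])
    option_col = "ABCD"[int(row["label"].split()[-1]) - 1]   # 'Selection 3' -> column C
    permutation = row[option_col]                            # e.g. 'BCA'
    # Assumed reading: the i-th letter gives the target position of segment 'ABC'[i],
    # e.g. 'BCA' places segment A second, segment B third and segment C first.
    ordered = [None, None, None]
    for seg, pos in zip("ABC", permutation):
        ordered["ABC".index(pos)] = segments[seg]
    return " ".join(ordered)

print(restore_order(ds[0]))
```

For the first row above, this reading yields the intro paragraph (segment C), followed by the Cheridito et al. discussion (segment B), followed by the Wang et al. continuation (segment A), which is consistent with the labeled option `CBA`.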