Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
75032
|
1
| null | null |
0
|
31
|
In the formula for the VIX we have that the spot value for the VIX is: [](https://i.stack.imgur.com/nr5t3.png)
The first part of this equation is exactly the formula for the price of a variance swap, i.e. the present value of realized close-to-close variance (really the price of a replicating portfolio of options).
But the second term, where we subtract a term built from the quotient of the forward and the strike just below the forward, seems like an arbitrary addition. What is the intuition behind adding this term? [https://cdn.cboe.com/api/global/us_indices/governance/Volatility_Index_Methodology_Cboe_Volatility_Index.pdf](https://cdn.cboe.com/api/global/us_indices/governance/Volatility_Index_Methodology_Cboe_Volatility_Index.pdf)
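Transcribing the formula from the linked methodology (up to notation):
$$
\sigma^2 = \frac{2}{T}\sum_{i}\frac{\Delta K_i}{K_i^2}e^{RT}Q(K_i) \; - \; \frac{1}{T}\left(\frac{F}{K_0}-1\right)^2
$$
where $F$ is the forward index level and $K_0$ is the first listed strike below $F$; the second term is the offset I am asking about.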
|
What is the intuition behind the VIX formula offset term
|
CC BY-SA 4.0
| null |
2023-03-27T06:33:35.990
|
2023-03-27T06:33:35.990
| null | null |
66804
|
[
"volatility",
"implied-volatility",
"derivatives",
"variance",
"vix"
] |
75033
|
1
| null | null |
0
|
23
|
I have some knowledge about the construction of a stochastic differential equation (SDE) governing asset price ($S(t)$) dynamics (this [answer](https://math.stackexchange.com/questions/4493155/analytical-view-on-stochastic-differentiability) helped me to some extent).
For instance, the Geometric Brownian Motion (GBM)
$$d S(t) = \mu S(t) dt + \sigma S(t) dz,$$
with drift $\mu = \nu + \frac{1}{2}\sigma^2$, where $\nu$ and $\sigma$ are the mean and standard deviation of the random variable $\omega_t=\ln S(t+1)-\ln S(t)$.
However, I am really interested in the derivation of fractional price dynamics (FSDE)
$$dS(t)=\mu S(t)dt+\sigma S(t)dB_H(t),$$
incorporating long-range price dependence (for example, my impression is that the gold price hike in the Indian market during the Covid-19 pandemic is still prevalent). What is the empirical role of the fractional Brownian motion (FBM) $B_H(t)$ with Hurst parameter $H \in (0,1)$? I cannot see the financial role these parameters play in the long-range time dependence of the asset price.
In other words, I want to understand how the long-range price dependence is mathematically incorporated in the respective SDEs.
The same obscurity remains for me in the case of the stochastic delay differential equation (SDDE, feedback/memory effect)
$$dS(t)=\mu(S(t-\tau))S(t)dt+\sigma(S(t-\tau)) S(t)dB_H(t),$$
where, the drift $\mu$ and volatility $\sigma$ can be regarded
as a function of the past state.
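To make the role of $H$ concrete, here is a minimal simulation sketch I put together (purely illustrative, all numbers made up): it draws fractional Brownian motion from its covariance $\tfrac{1}{2}(t^{2H}+s^{2H}-|t-s|^{2H})$ via a Cholesky factor and prints the lag-one autocorrelation of the increments, which is negative for $H<0.5$, zero for $H=0.5$ and positive (persistent, long-memory) for $H>0.5$:
```
import numpy as np

def fbm_path(n_steps, T, H, rng):
    # fBm on a regular grid via Cholesky of Cov(B_H(t), B_H(s)) = 0.5*(t^2H + s^2H - |t-s|^2H)
    t = np.linspace(T / n_steps, T, n_steps)
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H) - np.abs(t[:, None] - t[None, :])**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # small jitter for numerical stability
    return np.concatenate([[0.0], L @ rng.standard_normal(n_steps)])

rng = np.random.default_rng(1)
for H in (0.3, 0.5, 0.8):
    B = fbm_path(2000, 1.0, H, rng)
    dB = np.diff(B)
    ac1 = np.corrcoef(dB[:-1], dB[1:])[0, 1]   # theoretical value: 2**(2H-1) - 1
    print(f"H={H}: lag-1 increment autocorrelation ~ {ac1:+.2f}")
```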
|
On the operational process of fractional and delay Brownian motions (FGBM/GDBM) governing respective market scenarios
|
CC BY-SA 4.0
| null |
2023-03-27T08:17:09.813
|
2023-05-27T17:32:16.503
|
2023-05-27T17:32:16.503
|
848
|
60305
|
[
"stochastic-processes",
"geometric-brownian",
"pde"
] |
75034
|
2
| null |
75028
|
3
| null |
“How is it ever optimal to throw away positive time value?”
Answer: when the thing you are throwing away is less than what you are gaining through the early exercise. What is that? The interest you receive on the strike price by getting it earlier.
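A hypothetical illustration (numbers made up): consider a deep in-the-money American put with strike $K=100$, stock at $S=2$, $r=5\%$ and one year to expiry. Exercising today locks in $K-S=98$. Even in the best possible case (the stock drops to zero), waiting until expiry pays at most $100$, worth only $100e^{-0.05}\approx 95.1$ today. The interest earned on the strike by receiving it now, roughly $100(1-e^{-0.05})\approx 4.9$, exceeds the at most $2$ of remaining optionality, so throwing away the time value is optimal.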
| null |
CC BY-SA 4.0
| null |
2023-03-27T09:05:29.887
|
2023-03-27T09:05:29.887
| null | null |
18388
| null |
75036
|
2
| null |
21519
|
0
| null |
It doesn't make much sense to me to use a BEKK model if you are only looking at one time series. From my understanding, BEKK models are for multivariate (G)ARCH settings. In this case you can use `mgarch dvech` or `mgarch dcc`, where `dcc` stands for dynamic conditional correlation. Both of these modeling methods allow for changing conditional correlations.
see `help mgarch`
| null |
CC BY-SA 4.0
| null |
2023-03-27T15:31:13.470
|
2023-03-27T15:31:13.470
| null | null |
66814
| null |
75037
|
1
| null | null |
1
|
183
|
I want to bootstrap the yield curve of multiple currencies using 3M futures. I have it implemented in Matlab, but need to transfer to Python. I have a vector of dates and a vector of future prices.
Could somebody help me find the Python equivalent of Matlab's:
`IRDataCurve.bootstrap('Zero', quote_date, instrument_types, instruments,'InterpMethod','spline','Basis',2,'Compounding',-1,'IRBootstrapOptions', IRBootstrapOptions('ConvexityAdjustment',@(t) .5*sigma^2.*t.^2));`
Is it possible to replicate?
I found some options with QuantLib:
- PiecewiseLogLinearDiscount
- PiecewiseLogCubicDiscount
- PiecewiseLinearZero
- PiecewiseCubicZero
- PiecewiseLinearForward
- PiecewiseSplineCubicDiscount
But there is very little documentation.
Help would be very much appreciated
|
Python yield curve bootstrapping equivalent to Matlab IRDataCurve.bootstrap
|
CC BY-SA 4.0
| null |
2023-03-27T16:55:32.510
|
2023-04-28T21:08:21.007
|
2023-03-29T08:21:25.917
|
66815
|
66815
|
[
"programming",
"quantlib",
"bootstrapping",
"interpolation"
] |
75039
|
1
| null | null |
0
|
81
|
I'm interested in gaining a better understanding of order cancellation techniques in high-frequency market making. What are the different approaches to order cancellation that are commonly used in HFT market making, and how do they impact trading performance? Are there any best practices or industry-standard techniques for order cancellation in this context?
I'd also like to know more about how order cancellation strategies can be used to improve market making performance. Additionally, I'm curious about how to optimize order cancellation strategies for specific market conditions and trading goals.
Any resources, papers, or examples that you can provide to help me deepen my understanding of this topic would be greatly appreciated.
Thank you in advance for your help!
|
Exploring order cancellation techniques in high-frequency market making
|
CC BY-SA 4.0
| null |
2023-03-27T19:32:52.273
|
2023-03-27T19:32:52.273
| null | null |
66757
|
[
"high-frequency",
"market-making"
] |
75040
|
1
| null | null |
2
|
85
|
I would like to derive the exact delta-hedging strategy in the Black-Scholes market to replicate the following non-standard endogenous payoff. The particularity is that the payoff does not only depend on the final value of the stock but on the yearly returns of the hedge portfolio.
More specifically, it is given by
$$
L_T= \prod_{t=1}^{T} (1+g+\max [R_t-g,0])
$$
where $g$ is a constant guaranteed return rate and $R_t=\frac{\Pi_{t+1}-\Pi_{t}}{\Pi_{t}}$ is the yearly return of the hedging portfolio.
So, contrary to the Black-Scholes call option where the payoff is fixed and there is a dynamic portfolio of stocks and bonds, here the hedging portfolio serves both as the underlying security and the replicating portfolio.
In this case, I don't think there is a closed-form for the price but I would like to still find the perfect replicating strategy for this payoff. Any help or reference is welcome !
|
Exact delta-hedging for endogenous payoffs
|
CC BY-SA 4.0
| null |
2023-03-27T19:43:32.760
|
2023-03-27T19:43:32.760
| null | null |
27358
|
[
"option-pricing",
"black-scholes",
"hedging",
"greeks",
"delta-hedging"
] |
75041
|
1
| null | null |
1
|
78
|
I did a Fama-MacBeth regression and got an alpha of over 2.3%. The dependent variable is monthly returns and the independent variables are ln(size) and ln(book-to-market). I was just wondering: what does the intercept measure?
|
What does Fama-Macbeth regression model alpha (intercept) measure?
|
CC BY-SA 4.0
| null |
2023-03-27T23:12:52.490
|
2023-05-31T18:16:22.487
|
2023-05-31T18:16:22.487
|
848
|
66358
|
[
"fama-french"
] |
75043
|
1
| null | null |
0
|
68
|
I know it's possible to [efficiently estimate bid/ask spreads from OHLC prices](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3892335) and there are several methods to do so. However I have a data source that contains only bids, asks and close prices, no High, Low or Open prices. Is it possible to do the inverse, estimate the Open, High and Low prices from bids, asks and close prices?
|
Estimate Open, High and Low prices from bid, ask and close prices
|
CC BY-SA 4.0
| null |
2023-03-28T06:42:23.047
|
2023-03-28T11:19:24.160
| null | null |
1764
|
[
"time-series",
"estimation",
"bid",
"ask",
"ohlc"
] |
75044
|
1
| null | null |
0
|
20
|
Is there a way to get options historical data for free or for a cheap price, let's say 2 years at least of both EOD and hourly data, for just one underlying instrument such as the S&P 500? Alternatively, if I get an options trading account with a broker, is it possible to download historical data from that account? Even with APIs in Python, not necessarily with a provided trading platform. Thanks.
UPDATE
the post [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) is more general because it is not focused on options data and, moreover, it is highly outdated (2011); I opened 4 "interesting" links and they are all broken, as the related services no longer exist. So this question could be a good way to update and narrow down the issue.
|
Free historical data for options
|
CC BY-SA 4.0
| null |
2023-03-28T08:34:02.643
|
2023-03-29T11:06:59.857
|
2023-03-29T11:06:59.857
|
64782
|
64782
|
[
"options",
"programming",
"historical-data",
"broker",
"sp500"
] |
75045
|
2
| null |
75043
|
2
| null |
To my knowledge it is not possible to precisely recover the prices. Specifically in your setup, it is not possible to do this for the Open, as recording systems usually record the Open and the Close as the first and last traded price respectively, regardless of the bid or the ask.
As for approximate estimates, it seems sensible to take an average (mid quote) for the Open and to take the highest bid as the High and the lowest offer as the Low.
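A minimal pandas sketch of this approximation (the column names `bid`, `ask`, `close` and the bar frequency are assumptions about your data layout):
```
import pandas as pd

def approx_ohlc(quotes: pd.DataFrame, freq: str = "1D") -> pd.DataFrame:
    # quotes: DataFrame with a DatetimeIndex and columns 'bid', 'ask', 'close'
    g = quotes.resample(freq)
    return pd.DataFrame({
        "Open": (g["bid"].first() + g["ask"].first()) / 2,  # mid quote at the start of the bar
        "High": g["bid"].max(),                             # highest bid as a proxy for the High
        "Low": g["ask"].min(),                              # lowest offer as a proxy for the Low
        "Close": g["close"].last(),
    })
```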
| null |
CC BY-SA 4.0
| null |
2023-03-28T09:29:35.513
|
2023-03-28T11:19:24.160
|
2023-03-28T11:19:24.160
|
5656
|
5656
| null |
75046
|
1
| null | null |
2
|
204
|
I'm doing some statistics in order to evaluate the Forex market profitability.
I first define what I call "regeneration of the market".
For example, the following fictive order-book of EURUSD:
```
ASK:
20M 1.10010
8M 1.10005
2M 1.10002
BID:
1M 1.09999
9M 1.09995
15M 1.09990
Spread for 10M: 0.00010
```
If I open a BUY position of 10 millions, the order-book will be the following:
```
ASK:
20M 1.10010
BID:
1M 1.09999
9M 1.09995
15M 1.09990
Spread for 10M: 0.00015
```
I call the market "regenerated", when there is again a spread of 0.00010 for a 10M position.
The spread of 0.00010 is fictitious; I don't know what the average spread for 10M positions is.
So, just after having opened a position, how much time will a trader have to wait before the average spread for a 10M position is restored?
I know it is hard to give a precise answer, but I would like to have an approximation or an average.
|
How fast is the forex market regenerated?
|
CC BY-SA 4.0
| null |
2023-03-28T10:26:30.640
|
2023-03-28T11:22:54.763
|
2023-03-28T11:22:54.763
|
65724
|
65724
|
[
"fx",
"spread"
] |
75047
|
1
| null | null |
2
|
194
|
I've come across the notes of the 2003 lecture "[Advanced Lecture on Mathematical Science and Information Science I: Optimization in Finance](https://www.ie.bilkent.edu.tr/%7Emustafap/courses/OIF.pdf)" by Reha H. Tutuncu.
It describes on page 62 in section 5.2 a way to reformulate the tangency portfolio to the efficient frontier as a quadratic optimization problem:
$$
\min_{y,\kappa} y^T Q y \qquad \text{where} \quad (\mu-r_f)^T y = 1,\; \kappa > 0
$$
I'm wondering if anyone has seen an adaptation or similar work to incorporate a factor model. I believe an adaptation for the $y$ vector will need to take place but I'm unsure what that would be.
|
How to Maximize Portfolio Sharpe Ratio using Lagrange Multipliers in a Factor Model
|
CC BY-SA 4.0
| null |
2023-03-28T13:28:24.930
|
2023-05-30T12:58:46.570
|
2023-03-29T07:55:50.750
|
47484
|
55044
|
[
"portfolio-optimization",
"modern-portfolio-theory",
"factor-models",
"sharpe-ratio"
] |
75048
|
2
| null |
72210
|
1
| null |
I'm also in the process of learning and will just give you my thoughts here. Might not be fully right so please correct me or enter a discussion with me.
I think it depends on the information you are given. If you know that the other party knows the true value and you have only an estimate, there is information in their trading behaviour. If he buys from you despite a decrease in current EV, you can deduce that the other values are probably higher and raise your quotes. If, on the other hand, you are both unaware of the other numbers, don't let their trading take you too far away from the EV. To me, it does not make sense to quote bids far above the EV, even if it means losing inventory, right? At some point you will have to think: no further than this. At this point you can set a high ask and place a bid with an unlimited amount of lots at a value equal to or below your average position. If you have to get rid of inventory, you might have to go above this at some point.
Furthermore, they might be bluffing and trying to lure you out. If they keep lifting you and you have to keep a maximum spread of 5 and you are not paying attention to the amount of lots you offer, the following could happen:
- 43@48 Your ask is lifted for 1 lot
- 48@53 Your ask is lifted for 1 lot
- 53@58 Your ask is lifted for 1 lot
You are now short 3 lots with average value 53
- 58@66 Your bid is hit for 3 lots
You now no longer have a position and made a loss of 15.
| null |
CC BY-SA 4.0
| null |
2023-03-28T14:37:15.477
|
2023-03-28T14:37:15.477
| null | null |
66829
| null |
75050
|
1
|
75051
| null |
0
|
79
|
Let's say party A and party B are the two parties in a derivative contract.
If party B posts USD cash as collateral, it would expect party A to return the collateral with interest at settlement. How about if party B posts a US Treasury as collateral? Would it expect to receive anything on top of the security at settlement? If so, what is the rationale, please?
|
posting US treasury as collateral
|
CC BY-SA 4.0
| null |
2023-03-28T19:27:55.090
|
2023-03-28T21:03:59.640
| null | null |
34825
|
[
"discounting",
"collateral",
"csa"
] |
75051
|
2
| null |
75050
|
3
| null |
No, it would not. There is no need for an explicit interest payment when securities are posted. This is because an interest rate is effectively already being applied (the overnight repo rate on the securities). For example, Party B could either (a) post cash and receive interest or (b) borrow a Treasury /invest cash in the repo market, then post the Treasury. The two situations are almost equivalent.
| null |
CC BY-SA 4.0
| null |
2023-03-28T21:03:59.640
|
2023-03-28T21:03:59.640
| null | null |
18388
| null |
75053
|
1
| null | null |
1
|
44
|
I get from the time value of money that receiving \$1 today is worth more than receiving \$1 tomorrow. But what about paying money, as opposed to getting money? Do I prefer to pay \$1 tomorrow rather than today, because that same \$1 tomorrow would be worth <\$1 today? Or do I prefer benefits and costs today rather than tomorrow?
edit: Assuming no inflation, as to not complicate matters. No uncertainty at all about future prices.
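To make it concrete (assuming a positive risk-free rate $r$): the present value today of paying \$1 at a future time $t$ is $1/(1+r)^t < 1$; e.g. with $r=5\%$, paying \$1 in one year costs about \$0.952 in today's terms. So it seems I should prefer benefits early and costs late.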
|
Is it better to pay \$1 today or \$1 tomorrow?
|
CC BY-SA 4.0
| null |
2023-03-29T01:49:34.993
|
2023-03-29T01:55:34.680
|
2023-03-29T01:55:34.680
|
66836
|
66836
|
[
"interest-rates"
] |
75054
|
2
| null |
74838
|
2
| null |
Take a look at line 64: the C++ constructor takes five inputs, so a BlackVolTermStructure must be added.
[https://rkapl123.github.io/QLAnnotatedSource/d0/da0/blackscholesprocess_8hpp_source.html](https://rkapl123.github.io/QLAnnotatedSource/d0/da0/blackscholesprocess_8hpp_source.html)
| null |
CC BY-SA 4.0
| null |
2023-03-29T03:26:53.027
|
2023-03-29T03:26:53.027
| null | null |
42924
| null |
75055
|
1
| null | null |
0
|
51
|
I need to bootstrap a yield curve with 3M futures, using a cubic spline if possible.
Using, for example 3M Euribor, how do I bootstrap the yield curve using python?
I have a vector of dates and a vector of future prices.
I found out about the QuantLib library and, more specifically, `ql.PiecewiseCubicZero`. However, there is a big lack of documentation on how to use it, even more so with futures.
Any help would be much appreciated!
|
QuantLib: How to bootstrap Yield Curve using 3M futures - Python
|
CC BY-SA 4.0
| null |
2023-03-29T08:19:34.943
|
2023-03-29T08:19:34.943
| null | null |
66815
|
[
"quantlib",
"finance-mathematics",
"yield-curve",
"bootstrapping",
"interpolation"
] |
75057
|
1
| null | null |
1
|
62
|
Why does everyone say $\rho$ is correlation in Surface SVI?
$w = \frac{\theta_t}{2}(1+\rho\psi(\theta_t)k + \sqrt{(\psi(\theta_t)k+\rho)^2+1-\rho^2})$, with $\rho \in [-1,1]$
[This paper](https://arxiv.org/pdf/2204.00312.pdf) says it is proportional to the slope of smile at the ATM point.
[This paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2971502) says it is the correlation between the stock price
and its instantaneous volatility. What is the instantaneous volatility? I am confused because I am using surface SVI to model the implied volatility instead of stochastic volatility.
|
What does the correlation $\rho$ mean in surface SVI?
|
CC BY-SA 4.0
| null |
2023-03-29T10:03:50.007
|
2023-03-29T10:03:50.007
| null | null |
33650
|
[
"volatility",
"implied-volatility",
"volatility-smile",
"volatility-surface"
] |
75058
|
1
|
75060
| null |
3
|
168
|
In Thierry Roncalli's book Introduction to Risk Parity and Budgeting (2013), he gives an example of particular solutions to the Risk Budgeting portfolio such as for the $n=2$ asset case.
The risk contributions are:
$$
\frac{1}{\sigma(x)}
\cdot
\begin{bmatrix}
w^2\sigma_1^2 + \rho w(1-w) \sigma_1 \sigma_2 \\
(1-w)^2\sigma_2^2 + \rho w(1-w) \sigma_1 \sigma_2 \\
\end{bmatrix}
$$
The vector $[b,1-b]$ are the risk budgets.
He presents the optimal weight $w$ as:
$$
w^*
=
\frac
{(b - \frac{1}{2}) \rho \sigma_1\sigma_2 - b\sigma_2^2 +\sigma_1\sigma_2\sqrt{(b - \frac{1}{2})^2\rho^2 + b(1-b)}}
{(1-b)\sigma_1^2 - b\sigma_2^2 + 2(b - \frac{1}{2})\rho\sigma_1\sigma_2}
$$
How are these weights derived? I don't need a full derivation (though it would be helpful); I just don't know how it is obtained.
Is it done by setting the risk contributions equal to the budgets?
$$
\begin{bmatrix}
b \\
1-b \\
\end{bmatrix}
=
\frac{1}{\sigma(x)}
\cdot
\begin{bmatrix}
w^2\sigma_1^2 + \rho w(1-w) \sigma_1 \sigma_2 \\
(1-w)^2\sigma_2^2 + \rho w(1-w) \sigma_1 \sigma_2 \\
\end{bmatrix}
$$
|
Derivation of optimal portfolio weights using Risk Budgeting approach
|
CC BY-SA 4.0
| null |
2023-03-29T11:29:40.670
|
2023-03-30T06:43:14.153
|
2023-03-29T11:49:33.490
|
16148
|
50477
|
[
"volatility",
"portfolio-optimization",
"risk",
"portfolio",
"modern-portfolio-theory"
] |
75059
|
1
| null | null |
0
|
53
|
I have often wondered what kind of risk restrictions option traders in hedge funds have, but have not managed to find any information on this matter. I presume there must be some kind of measure a Portfolio Manager (PM) uses to decide whether to approve their trades or not. However, I cannot find any data about what kind of risk measure would be used. Let me give an example. Let us say I want to buy an FX barrier option, taking a directional bet. Would the PM then come to me and say: I think you have too much Delta/Gamma/Vega etc.? To be more specific, what is the relationship between the risks (e.g. Delta/Gamma/Vega, Theta) and the potential return that a typical hedge fund or trading desk accepts? Is there some literature on this? What kind of option strategies are typically used to maximize risk-adjusted return?
|
Trading options - risk adjusted return
|
CC BY-SA 4.0
| null |
2023-03-29T11:57:03.863
|
2023-03-29T11:57:03.863
| null | null |
44897
|
[
"quant-trading-strategies",
"trading",
"stochastic-volatility",
"greeks",
"option-strategies"
] |
75060
|
2
| null |
75058
|
2
| null |
You are correct in your assumption, this is specified at the start of section 2.2.1 Definition of a risk budgeting portfolio.
>
We consider a set of given risk budgets $\{B_1,\dots,B_n\}$. Here
$B_i$ is an amount of risk measured in dollars. We denote
$\mathcal{RC}_i(x_1,\dots,x_n)$ the risk contribution of asset $i$
with respect to portfolio $x=(x_1,\dots,x_n)$. The risk budgeting
portfolio is then defined by the following constraints:
$$ \mathcal{RC}_1(x_1,\dots,x_n)=B_1 \\ \vdots \\
\mathcal{RC}_i(x_1,\dots,x_n)=B_i \\ \vdots \\
\mathcal{RC}_n(x_1,\dots,x_n)=B_n $$
The two asset case
We can rewrite the two equations into a single formula by solving both of them for $\sigma(x)$ and then eliminating $\sigma(x)$ by setting the two expressions equal to each other:
$$
\frac{w^2\sigma_1^2+w(1-w)\rho\sigma_1\sigma_2}{b}=
\frac{(1-w)^2\sigma_2^2+w(1-w)\rho\sigma_1\sigma_2}{1-b}
$$
After rearranging this becomes a (complicated) quadratic equation in $w$, which can be solved via [the quadratic formula](https://en.wikipedia.org/wiki/Quadratic_formula). By cancelling terms in the resulting fraction and observing that $0\leq w\leq 1$ (and thus eliminating one of the solutions of the quadratic formula) you should arrive at the optimal $w^*$.
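A quick numerical check of this (illustrative inputs, using the closed form quoted in the question):
```
import numpy as np

s1, s2, rho, b = 0.20, 0.10, 0.30, 0.40   # assumed volatilities, correlation and risk budget

# closed-form w* from the question
num = (b - 0.5) * rho * s1 * s2 - b * s2**2 + s1 * s2 * np.sqrt((b - 0.5)**2 * rho**2 + b * (1 - b))
den = (1 - b) * s1**2 - b * s2**2 + 2 * (b - 0.5) * rho * s1 * s2
w = num / den

# risk contributions of the two assets at w
sig = np.sqrt(w**2 * s1**2 + (1 - w)**2 * s2**2 + 2 * rho * w * (1 - w) * s1 * s2)
rc1 = (w**2 * s1**2 + rho * w * (1 - w) * s1 * s2) / sig
rc2 = ((1 - w)**2 * s2**2 + rho * w * (1 - w) * s1 * s2) / sig
print(w, rc1 / (rc1 + rc2))   # the second number equals the budget b
```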
| null |
CC BY-SA 4.0
| null |
2023-03-29T12:00:39.660
|
2023-03-30T06:43:14.153
|
2023-03-30T06:43:14.153
|
5656
|
5656
| null |
75061
|
1
| null | null |
2
|
105
|
I would like to price equity quanto options with the Heston Local-Stochastic Volatility model (LSV) but I am having hard time understanding how to apply quanto adjustment in such complex setup.
When it comes to the pure Heston model, after the reading of the following paper:
[https://www.risk.net/derivatives/2171456/quanto-adjustments-presence-stochastic-volatility](https://www.risk.net/derivatives/2171456/quanto-adjustments-presence-stochastic-volatility)
I have an idea of how to apply the quanto drift adjustment:
- calibrate the Heston Heston parameters to Vanillas
- adjust the drift in the price process $S_t$ with the product of Equity/FX correlation, FX volatility and EQ volatility $(\rho_{S,FX}\sigma_S\sigma_{FX})$
- adjust the drift in the volatility process $\nu_t$ by adjusting the long term mean $\theta$
However, in the LSV model there is a local volatility component entering the price process, which makes things even more complex. Unfortunately, I cannot find any resource about adjusting the LSV model for quanto exotic options.
How one would approach the calibration of the LSV model for quanto options?
|
Pricing Quantos with Local-Stochastic Volatility model
|
CC BY-SA 4.0
| null |
2023-03-29T12:34:28.147
|
2023-03-31T11:32:08.440
|
2023-03-31T11:32:08.440
|
66693
|
66693
|
[
"option-pricing",
"stochastic-volatility",
"local-volatility",
"quanto"
] |
75062
|
2
| null |
75061
|
1
| null |
You will need two LSV models. The first LSV model is calibrated to vanilla and potentially exotic equity derivatives. With this you will be able to price any kind of vanilla and exotic equity derivatives under the domestic measure. But for quanto vanilla or exotic options you need the ability to price in a so-called foreign measure. This is why you will need a second LSV model for the FX rate. This second LSV model also has to be calibrated to vanilla and potentially to various exotic FX options. In addition to this, there will be a (potentially stochastic) correlation between the two spot processes. This is usually calibrated based on historical data, but if you have reliable data for various quanto vanillas, you can try to calibrate to them and use the result, for example, in Monte Carlo.
In conclusion, I would say this is not an easy exercise. It involves a lot of market calibration (two stochastic volatility processes and two leverage functions) and either the historical estimation or the calibration of the fifth stochastic process: a correlation between the two. Only after this will you be able to produce reliable equity quanto prices.
| null |
CC BY-SA 4.0
| null |
2023-03-29T14:24:42.453
|
2023-03-29T14:24:42.453
| null | null |
44897
| null |
75063
|
1
| null | null |
0
|
50
|
I have been working on a market neutral pairs trading strategy. See [https://medium.com/@nderground-net/backtesting-a-pairs-trading-strategy-b80919bff497](https://medium.com/@nderground-net/backtesting-a-pairs-trading-strategy-b80919bff497)
I am trying to understand whether I am properly estimating the margin requirement for the S&P 500 pairs.
I have talked to Interactive Brokers support about this question but they were of limited help.
My understanding is that if I have 1000 margin allocated for the long/short pair I can short 2000 of stock (ignoring share issues). This is 50% cash for the short. When the short is executed, the account will be credited with 2000 from the short proceeds.
I then take a long position in 2000 of stock with the proceeds of short.
The end balance (again ignoring share price issues) is a 2000 short position and a 2000 long position and 1000 in margin.
Is this a correct analysis of how a long/short position can be built?
Many thanks,
Ian
|
What is the margin requirement for a dollar neutral long short portfolio
|
CC BY-SA 4.0
| null |
2023-03-29T14:35:14.747
|
2023-03-29T14:35:14.747
| null | null |
63593
|
[
"quant-trading-strategies",
"pairs-trading"
] |
75064
|
2
| null |
65912
|
0
| null |
$\frac{1}{(1+z/2)^2}$ is the correct answer here. Because it is semi-annual you divide the rate by two and multiply the exponent by two for each period.
To answer your follow-up question: yes, you will divide 1.5 by two but, to be clear, this would not be the first cash flow in your stream of payments; there would be 0.5 and 1 preceding it. All periods in semi-annual compounding are divided by two and the exponent is multiplied by two.
For example, the last cash flow of a 30 year 5% fixed bond would be $\frac{1 + {(.05/2)}}{(1+ {z/2})^{2 * 30}}$
To use an explicit example, if you start the period at 0.45 years then the following periods will need to be 0.95 years, 1.45 years, 1.95 years and so on. You need the consistency of equally spaced periods in order for the semi-annual compounding to work.
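A tiny numeric illustration of the formulas above (the zero rate $z=4\%$ here is just an assumed value):
```
z = 0.04                                   # assumed semi-annually compounded zero rate
df_1y = 1 / (1 + z / 2)**2                 # discount factor for a 1-year cash flow
last_cf = (1 + 0.05 / 2) / (1 + z / 2)**(2 * 30)   # PV of the final coupon + principal of a 30y 5% bond, per 1 of face
print(round(df_1y, 4), round(last_cf, 4))  # approx 0.9612 and 0.3124
```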
FYI QuantLib has some built-in libraries that can help with building curves.
Hope this helps.
| null |
CC BY-SA 4.0
| null |
2023-03-29T15:31:46.450
|
2023-03-29T15:31:46.450
| null | null |
62894
| null |
75065
|
2
| null |
75037
|
1
| null |
You can bootstrap a yield curve using the QuantLib library in Python. First, you need to install QuantLib for Python by running:
```
pip install QuantLib-Python
```
This is an example of how to bootstrap a yield curve using QuantLib (Python equivalent code of the Matlab code you've published):
```
import QuantLib as ql
# Set up parameters
quote_date = ql.Date(28, 3, 2023)
sigma = 0.01 # Adjust based on your data
yield_curve_basis = ql.Actual365Fixed()
compounding = ql.Simple
day_count = ql.Actual365Fixed()
# Set up your dates and future prices
dates = [ql.Date(30, 6, 2023), ql.Date(30, 9, 2023), ql.Date(31, 12, 2023)] # Replace with your dates
future_prices = [0.01, 0.015, 0.02]  # Replace with the rates implied by your futures prices (e.g. (100 - price)/100)
# Calculate convexity adjustments
time_to_maturities = [(date - quote_date) / 365.0 for date in dates]
convexity_adjustments = [0.5 * sigma ** 2 * t ** 2 for t in time_to_maturities]
# Create instruments
instrument_types = [ql.DepositRateHelper(ql.QuoteHandle(ql.SimpleQuote(price + adjustment)),
ql.Period(3, ql.Months),
2,
ql.TARGET(),
ql.ModifiedFollowing,
False,
day_count) for price, adjustment in zip(future_prices, convexity_adjustments)]
# Bootstrap yield curve (cubic interpolation of zero rates)
curve = ql.PiecewiseCubicZero(quote_date, instrument_types, yield_curve_basis)
# Print zero rates
for date, t in zip(dates, time_to_maturities):
zero_rate = curve.zeroRate(t, compounding).rate()
print(f"Zero rate for {date}: {zero_rate * 100:.2f}%")
```
This code sets up the required instruments and bootstraps the yield curve using cubic interpolation of the zero rates. The `curve` object is a `PiecewiseCubicZero` instance representing the bootstrapped yield curve. You can use this object to obtain zero rates, discount factors, or forward rates as needed.
Please note that you might need to adjust the parameters and input data to match your specific requirements.
| null |
CC BY-SA 4.0
| null |
2023-03-29T20:21:29.070
|
2023-03-29T20:21:29.070
| null | null |
66845
| null |
75066
|
1
| null | null |
1
|
85
|
To discount a cash flow I am told to use the interest rate plus a credit spread if the cash flow carries credit risk. Why do I not include inflation in the calculation? In 2022 rates went high because the Fed was hiking rates and not because inflation was high. But the value of my 30-year cash flow gets smaller and smaller with inflation prevailing. Why is inflation not part of the discounting:
$$DF=1/(1+r_{\text{free}})\times 1/(1+\text{credit spread})\times 1/(1+\text{inflation rate})$$
|
Discounting with inflation?
|
CC BY-SA 4.0
| null |
2023-03-29T20:47:08.807
|
2023-04-03T07:09:30.103
|
2023-04-03T07:09:30.103
|
5656
|
22823
|
[
"discounting"
] |
75068
|
2
| null |
75027
|
0
| null |
Edit: based on comment.
In your question, you didn't specify the boundary conditions being considered.
```
V[0,:] = S0-K*np.exp(-r*T[0,:])
```
Since this condition comes from put-call parity and is meant to approximate the option value for large S, it should use S (the upper end of the grid) rather than the initial price, S0. In this case, it should be
```
V[0,:] = S0*2-K*np.exp(-r*T[0,:])
```
as per your maximum defined S.
| null |
CC BY-SA 4.0
| null |
2023-03-29T21:41:55.903
|
2023-04-19T02:04:07.713
|
2023-04-19T02:04:07.713
|
44205
|
44205
| null |
75069
|
2
| null |
74451
|
3
| null |
The heat equation describes how heat (usually measured by temperature) diffuses along the length of a material and over time.
For Black-Scholes, the equation describes how the value of the option diffuses over time as the underlying, instead of a spatial coordinate, "travels" across its range of possible values.
| null |
CC BY-SA 4.0
| null |
2023-03-29T22:20:36.737
|
2023-03-29T22:20:36.737
| null | null |
44205
| null |
75071
|
2
| null |
75066
|
3
| null |
The reason you do not have to discount for the inflation rate is that the risk-free rate is set, in part, by the inflation rate. That is, during times of high inflation you would expect the risk-free rate of money to be high.
The risk free interest rate is normally specified per year. As such, you may need to adjust your formula for DF.
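One standard way to see this is the Fisher relation (stated here for reference) linking the nominal rate, the real rate and expected inflation:
$$
1 + r_{\text{nominal}} = (1 + r_{\text{real}})(1 + \pi)
$$
so discounting at the nominal risk-free rate already embeds an inflation term, and adding a separate $1/(1+\text{inflation rate})$ factor would double-count it.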
| null |
CC BY-SA 4.0
| null |
2023-03-30T02:56:53.040
|
2023-03-30T02:56:53.040
| null | null |
18353
| null |
75072
|
1
|
75080
| null |
2
|
177
|
I am trying to find some resources online on how to perform MBS benchmarking, but I was unable to find anything.
Could someone explain how to perform MBS benchmarking and provide some resources?
|
MBS benchmarking
|
CC BY-SA 4.0
| null |
2023-03-30T07:03:31.483
|
2023-03-30T16:57:48.877
| null | null |
65521
|
[
"mbs",
"securitization"
] |
75073
|
1
|
75074
| null |
0
|
64
|
I am currently looking at a way to run a basic sentiment analysis NLP model and look for a possible relationship with firm CDS spreads. I am, however, unsure as to which spread to use for my regression (par spread or conventional spread).
Especially since both the par spread and the conventional spread seem to be quoted in the same way, i.e. 12bps = 0.00012.
Thank you in advance
|
Markit CDS price data
|
CC BY-SA 4.0
| null |
2023-03-30T09:54:07.027
|
2023-03-30T11:40:41.077
| null | null |
66843
|
[
"cds"
] |
75074
|
2
| null |
75073
|
2
| null |
Related recent paper: [https://www.spglobal.com/marketintelligence/en/news-insights/research/watch-your-language-executives-remarks-on-earnings-calls-impact-cds-spreads](https://www.spglobal.com/marketintelligence/en/news-insights/research/watch-your-language-executives-remarks-on-earnings-calls-impact-cds-spreads)
Use conventional spreads. Par spreads are legacy, going back to before the Big Bang. No one should be using them.
| null |
CC BY-SA 4.0
| null |
2023-03-30T11:40:41.077
|
2023-03-30T11:40:41.077
| null | null |
36636
| null |
75078
|
2
| null |
75072
|
2
| null |
The question is lacking context, but I will try to give an overview:
- You would first need an index to benchmark to, common MBS benchmark indices include the Bloomberg Barclays U.S. MBS Index, the ICE BofA U.S. Mortgage-Backed Securities Index, and the FTSE MBS Index.
- Then you take the data that you have on your MBS portfolio and use it for the subsequent comparison. Make sure that you understand the quality of your data, that it is up-to-date and that you have enough of it.
- Calculate performance metrics for both your portfolio and the benchmark. Common metrics include the annualized return, the standard deviation, the Sharpe Ratio, the duration, the prepayment speed, the weighted average loan age, the OAS, the effective duration and price convexity (see the sketch right after this list).
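A minimal sketch of the return-based metrics (the monthly-return inputs and the risk-free rate are assumptions for illustration):
```
import numpy as np
import pandas as pd

def summary(port: pd.Series, bench: pd.Series, rf: float = 0.0, periods: int = 12) -> pd.Series:
    # port, bench: periodic (e.g. monthly) simple returns of the portfolio and the benchmark
    excess = port - rf / periods
    active = port - bench
    return pd.Series({
        "annualized return": (1 + port).prod() ** (periods / len(port)) - 1,
        "annualized volatility": port.std() * np.sqrt(periods),
        "Sharpe ratio": excess.mean() / port.std() * np.sqrt(periods),
        "tracking error": active.std() * np.sqrt(periods),
        "information ratio": active.mean() / active.std() * np.sqrt(periods),
    })
```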
Relevant resources include:
- MBS Valuation and Risk Management Under SOFR
- Benchmarking the Canadian Mortgage-Backed Securities market
| null |
CC BY-SA 4.0
| null |
2023-03-30T15:38:07.250
|
2023-03-30T15:38:07.250
| null | null |
5656
| null |
75079
|
2
| null |
74992
|
5
| null |
1) have I applied Heaviside correctly?
I'd say yes, as the result matches the finite difference calculation. Although I'm puzzled why you use such a complicated smoothing for the Heaviside function; in principle, any smooth approximation should work. I get the same result with
```
def HeavisideApprox(x, epsilon):
return 0.5 + 0.5 * np.tanh(100*x)
```
For more possible choices, see for example
[https://github.com/norse/norse/blob/main/norse/torch/functional/threshold.py](https://github.com/norse/norse/blob/main/norse/torch/functional/threshold.py)
2) Is the difference in Gammas between AAD and FD acceptable?
This very much depends on the use case. In general, AAD numbers should be more accurate than FD on top of Monte-Carlo.
3) I had to multiply the AAD Gammas by the weights to match FD, and I'm not sure if any other Greeks need that done as well (2nd order maybe).
It looks like `egrad(egrad(MC_basket, (0)))` returns a sum of second order derivatives:
```
Help on function elementwise_grad in module autograd.wrap_util:
elementwise_grad(fun, argnum=0, *nary_op_args, **nary_op_kwargs)
Returns a function that computes the sum of each column of the Jacobian of
`fun`, in one pass. If the Jacobian is diagonal, then this is the diagonal
of the Jacobian.
```
Hence, `gamma[0]` contains
$$ \frac{d^2 V}{d F_1^2} + \frac{d^2 V}{d F_1 d F_2} + \frac{d^2 V}{d F_1 d F_3} $$
After multiplying by `weights[0]` it contains what you need: $\frac{d^2 V}{d F_1^2}$, since deltas are proportional to weights:
$$ \frac{1}{w_1} \frac{d V}{d F_1} = \frac{1}{w_2} \frac{d V}{d F_2} = \frac{1}{w_3} \frac{d V}{d F_3} $$
To avoid explicitly multiplying by `weights`, you may consider to use
```
gamma = hessian(MC_basket, 0)(F0, vols, K, T, IR, corr, weights, trials, steps).diagonal()
```
Extra Comments
- Be careful with 1/2: under integer division it evaluates to 0 (e.g. in Python 2 and many other languages; Python 3 returns 0.5). Prefer 1./2 or 0.5 to stay on the safe side.
- In Python, (0) evaluates to the integer 0, not a tuple. A one-element tuple is written (0,).
| null |
CC BY-SA 4.0
| null |
2023-03-30T15:59:25.943
|
2023-03-30T16:13:25.220
|
2023-03-30T16:13:25.220
|
66505
|
66505
| null |
75080
|
2
| null |
75072
|
5
| null |
Despite being written more than twenty years ago, you can get a solid introduction to many of the issues that come up in MBS Benchmarking in "Gaining Exposure to Mortgage Benchmarks - A Guide for Global Investors" by Kulason et al (available online).
For more recent work, see Chapters 19-21 in Dynkin et al. [Quantitative Management of Bond Portfolios](https://press.princeton.edu/books/paperback/9780691202778/quantitative-management-of-bond-portfolios) and Chapter 30 in Fabozzi's [The Handbook of Mortgage-backed Securities](https://academic.oup.com/book/7943).
| null |
CC BY-SA 4.0
| null |
2023-03-30T16:53:15.490
|
2023-03-30T16:57:48.877
|
2023-03-30T16:57:48.877
|
16148
|
50071
| null |
75081
|
1
| null | null |
2
|
148
|
Where can you find ESZ22 (CME S&P 500 E-Mini Futures) prices going back multiple years? Ideally, I'm looking for bars every hour or more.
The same for CBOE VIX Futures.
|
Intraday Historical Data: CME S&P 500 E-Mini Futures
|
CC BY-SA 4.0
| null |
2023-03-30T17:47:58.323
|
2023-03-31T01:42:54.557
| null | null |
66857
|
[
"market-data",
"historical-data"
] |
75083
|
2
| null |
75081
|
4
| null |
A few options:
- CME DataMine
- Databento
- Cboe Data Shop
| null |
CC BY-SA 4.0
| null |
2023-03-31T01:42:54.557
|
2023-03-31T01:42:54.557
| null | null |
4660
| null |
75084
|
1
| null | null |
1
|
43
|
>
It is common in commodities markets to hold many positions, both long
and short, across a range of contract months beginning in the prompt
month to five or more years out. [My question is:] What is the
industry standard model for condensing a strip of forward contracts
into a single exposure number "FME" such that it is reasonable to
approximate PNL by taking FME*spot price. (presume you are given a
corr/cov matrix)?
Source: [Methods for "prompt month equivalent" exposure in commodities forwards/futures markets](https://quant.stackexchange.com/questions/14467/methods-for-prompt-month-equivalent-exposure-in-commodities-forwards-futures-m/14472#14472?newreg=39caead007924ac0b91ba41117dfef8f) Quantfinance
I've explored the above post and the below article regarding the topic, however, none seem to be an accurate depiction of the front month hedge ratio in current market conditions. Anyone find anything more recent to solve this? Thanks in advance!
I understand the author's suggestion as multiplying the correlation of log returns by the ratio of implied volatilities of each tenor to the implied volatility of the prompt month contract. To my understanding this is supposed to be the FME value to apply to the delta position in our book to derive the front month hedge. However, the series that comes out of this calculation doesn't seem to fit the fundamentals of the current market. Any help or clarification on this would be greatly appreciated.
[https://www.risk.net/sites/default/files/import_unmanaged/risk.net/data/eprm/pdf/january03/technical.pdf](https://www.risk.net/sites/default/files/import_unmanaged/risk.net/data/eprm/pdf/january03/technical.pdf)
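For what it's worth, here is a minimal sketch of the calculation as I currently understand it (all numbers are made up for illustration): the FME factor per tenor is the correlation of that tenor's log returns with the prompt month multiplied by the ratio of its implied volatility to the prompt-month implied volatility, and the prompt-month-equivalent position is the delta-weighted sum.
```
import numpy as np

corr_to_prompt = np.array([1.00, 0.95, 0.85, 0.70])  # correlation of each tenor's log returns with the prompt month
implied_vol = np.array([0.45, 0.40, 0.35, 0.30])     # implied volatility per tenor
deltas = np.array([100.0, -50.0, 75.0, 25.0])        # delta position per tenor (lots)

fme_factor = corr_to_prompt * implied_vol / implied_vol[0]
fme_position = np.sum(deltas * fme_factor)           # approximate PnL ~ fme_position * prompt price move
print(fme_factor.round(3), fme_position)
```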
|
Updated Methods for deriving the "front month equivalent" series in commodities derivatives
|
CC BY-SA 4.0
| null |
2023-03-31T12:17:31.240
|
2023-03-31T13:08:56.227
|
2023-03-31T13:08:56.227
|
16148
|
66870
|
[
"hedging",
"commodities"
] |
75087
|
2
| null |
63482
|
2
| null |
Let's assume you are working with a 1-dimensional Brownian motion, so the instantaneous correlation matrix $\rho$ reduces to 1, and $C$ and $C'$ are both 1.
Now, referring to Proposition 2.3.1, in particular with $S_t = B(t)$ and $U_t = P(t,T)$, you can write out the two processes as:
$dB(t) = (...)dt$
$dP(t,T) = (...)dt - \sigma B(t,T)P(t,T)dW_t^{T}$,
Note that there's no volatility coefficient in the first equation, so $\sigma_t^B=0$. Also note that the $B(t,T)$ in the second equation is not the bank-account numeraire, but is the $B$ function in Brigo's book.
Now, the drift in $\mathbb{Q}^T$, denoted by $\mu_t^{P}$, can be derived using equation (2.12):
$\mu_t^{P}(r(t))=\mu_t^{B}(r(t))-\sigma\left(\frac{0}{B(t)}-\frac{-\sigma B(t,T)P(t,T)}{P(t,T)}\right) = \mu_t^{B}(r(t))-\sigma^2B(t,T)$
Hence the additional drift due to the measure change is $-\sigma^2B(t,T)$ (note the diffusion coefficient of $P(t,T)$ is $-\sigma B(t,T)P(t,T)$), and then you have equation (3.9).
Applying the same logic to equation (2.13), you'll get:
$dW^T(t)=dW(t)+\sigma B(t,T)dt$
Hope this helps.
| null |
CC BY-SA 4.0
| null |
2023-03-31T15:41:59.293
|
2023-04-30T18:06:06.670
|
2023-04-30T18:06:06.670
|
5656
|
66874
| null |
75088
|
1
| null | null |
2
|
204
|
Suppose that $(\Omega,\mathcal{F},\mathbb{P})$ is a standard probability space and $Z_t=(Z_t^1,Z_t^2)$ is a two dimensional Brownian motion with the filtration $\mathcal{F}^Z_{t}$ and $Z_t^1$, $Z_t^2$ are correlated with $\rho\in(-1,1)$. Let $\mathcal{E}_t$ the endowment of a trader with the following dynamics
$$d\mathcal{E}_t=\mu_1 dt + \sigma_1 dZ^1_t$$
The utility payoff of the investor is concave and increasing with respect to her demand $\alpha_t$ at any $t\in [0, T]$, and to keep the problem simple we assume that $\forall t\in [0,T]$ the trader does not have any budget constraint and can reallocate her demand with respect to a risky asset $V$, which is used to hedge her endowment $\mathcal{E}_t$, with the following dynamics
$$dV_t = \mu_2 dt + \sigma_2 dZ^2_t $$
the position of the trader is
$$W_t = \mathcal{E}_t+\alpha_t (P_t - V_t)$$
where $P_t$ stands for the price of the risky asset $V$ on date $t$ when the demand of the trader will be $\alpha_t$, and $P_t$ is $F_t^Z$-adapted. The expected utility payoff will be
$$\mathbb{E}(U(\mathcal{E}_{t}+\alpha_{t} (P_{t} - V_{t}))|F_0^Z)$$
At any $t\in [0,T]$ there also exist noise traders who have random, price-inelastic demands; we denote by $B_t$ their cumulative orders at any $t$ and assume that $B$ is also an $\mathcal{F}^Z_{t}$-adapted Brownian motion that is independent of $Z=(Z^1,Z^2)$. Hence the total order is
$$Y_t = B_t + \alpha_t$$
where $\alpha$ is the demand of the trader and the price dynamics are
$$P_{t} = \mathbb{E}(V_{t}|Y_{t})$$
where (I believe that) the price dynamics resemble the market efficiency hypothesis as in Kyle 1985. Note here that the trader knows $P_t$ when she trades and that the market maker cannot distinguish the order $\alpha$ from $B$, and hence she cannot know the trader's order exactly.
How could someone solve for the optimal demand $\alpha_t^*$ such that
$$\alpha^*=\operatorname{argmax}\{\mathbb{E}(U(\mathcal{E}_{T}+\alpha_{T} (P_{T} - V_{T}))|F_0^Z)\}$$ when $(t,\alpha_t,V_t) \in \mathbb{R}_{+}\times\mathbb{R}\times\mathbb{R}$?
**Remark:** In contrast to the classic model of Kyle, there is an endowment $\mathcal{E}$ that refers to the hedging needs of the trader as well. More precisely, the trader will give an order $a^*$ not only because of her informational advantage (which she wants to exploit) on the dynamics of $V$, but also based on her hedging needs, which would make it more difficult for the market maker to elicit the (private) information that the trader holds about $V$ at any $t\in[0,T]$, even in the case where no noise traders exist.
|
How do your solve for trader's optimal demand in market similar to Kyle's model?
|
CC BY-SA 4.0
| null |
2023-03-31T16:16:32.110
|
2023-04-04T07:39:20.153
|
2023-04-04T07:39:20.153
|
23886
|
63646
|
[
"brownian-motion",
"correlation",
"optimization",
"utility-theory",
"stochastic-control"
] |
75089
|
1
| null | null |
2
|
58
|
The butterfly spread satisfies the inequality
$$c(X_1) - 2c(X_2) + c(X_3) \geq 0$$
where the call strikes satisfy $X_1<X_2<X_3$ and $X_2 - X_1 = X_3 - X_2$.
There is a “proof” that was provided here [https://quant.stackexchange.com/a/32609/66878](https://quant.stackexchange.com/a/32609/66878). However the proof is missing many steps and serves more as an outline.
Could anyone elaborate on the proof or provide their own in full detail?
Thanks so much,
Jordan
|
Proving that the return from the butterfly spread is nonnegative
|
CC BY-SA 4.0
| null |
2023-04-01T06:09:39.977
|
2023-04-01T06:09:39.977
| null | null |
66878
|
[
"finance-mathematics"
] |
75090
|
1
|
75100
| null |
1
|
98
|
I am trying to calculate the zero rate for a piecewise linear zero curve. I have the following deposit on the short end
- STIBOR 1D, is identified as a tomorrow next deposit: 0.02416
- STIBOR 3 Month: 0.02701
I then use the very nice package QuantLib to find the continuous zero rates:
```
import QuantLib as ql
from datetime import datetime, date, timedelta
import pandas as pd
date_today = datetime(2022,12,30)
# Set the date today
ql_date_today = ql.Date(date_today.strftime("%Y-%m-%d"), "%Y-%m-%d") #
ql.Settings.instance().evaluationDate = ql_date_today
helpers = []
depositRates = [0.02416, 0.02701]
depositMaturities = ['1D', '3M']
calendar = ql.Sweden()
fixingDays = 2
endOfMonth = False
convention = ql.ModifiedFollowing
dayCounter = ql.Actual360()
for r,m in zip(depositRates, depositMaturities):
if m == '1D':
fixingDays = 1
convention = ql.Following
elif m == '3M':
convention = ql.Following
fixingDays = 2
helpers.append(ql.DepositRateHelper(ql.QuoteHandle(ql.SimpleQuote(r)),
ql.Period(m),
fixingDays,
calendar,
convention,
endOfMonth,
dayCounter))
curve1 = ql.PiecewiseLinearZero(0, ql.TARGET(), helpers, ql.Actual365Fixed())
curve1.enableExtrapolation()
def ql_to_datetime(d):
return datetime(d.year(), d.month(), d.dayOfMonth())
def calc_days(maturity, date_now = date_today):
return (maturity-date_now).days
dates, rates = zip(*curve1.nodes())
dates = list(map(ql_to_datetime, dates))
days = list(map(calc_days, dates))
df = pd.DataFrame(dict({"Date": dates, "Rate": rates, "Days" : days}))
df
```
The result from QuantLib is:
| | Date | Rate | Days |
|---|----|----|----|
| 0 | 2022-12-30 00:00:00 | 0.0244947 | 0 |
| 1 | 2023-01-03 00:00:00 | 0.0244947 | 4 |
| 2 | 2023-04-03 00:00:00 | 0.027174 | 94 |
Now I wish to recreate the values that QuantLib produces, given that the curve is bootstrapped with Actual/365. For the first deposit I use the simple rate, $DF = \frac{1}{1+RT}$, to calculate the discount factor (I also find it interesting that the accrual that matches QuantLib's result is 1/360, when my intuition tells me it should be 4/360 given the maturity date):
$$
DF_1 = \frac{1}{1+0.02416 \cdot \frac{1}{360}} \approx 0.999932893 .
$$
Then the continuous zero rate becomes:
$$
r_1 = -365/1 \cdot \ln (DF_1) \approx 0.02449473.
$$
Moreover, if we continue with the second rate we obtain the following discount factor:
$$
DF_2 = \frac{0.999932893 }{1+0.02701 \cdot \frac{94}{360}} \approx 0.99293014.
$$
At last the continuous zero rate for the second deposit is
$$
r_1 = -365/94 \cdot \ln (DF_2) \approx 0,02754960 .
$$
Thus, the result that I get by calculating the zero rate manually for the second deposit does not really match QuantLib's result, so I know I am doing my calculations wrong. I have tried to dig into the C++ source code of QuantLib with no success. I have also tried to change the maturity dates in the calculations but still have not found a matching value for the deposits. I would be glad for any help or pointers.
|
How to calculate the discount factors for two deposits in an interest rate curve
|
CC BY-SA 4.0
| null |
2023-04-01T07:24:16.670
|
2023-04-02T12:25:16.457
|
2023-04-02T12:25:16.457
|
52555
|
52555
|
[
"programming",
"quantlib",
"yield-curve",
"bootstrapping",
"zero-coupon"
] |
75092
|
1
|
75104
| null |
2
|
71
|
I do not understand why the mean levels of the state variables under the risk-neutral measure, $\theta^{\mathbb{Q}}$, in the arbitrage-free Nelson-Siegel model are set to zero. It should follow from the following relationship:
The relationship between the factor dynamics under the real-world probability measure $\mathbb{P}$ and the risk-neutral measure $\mathbb{Q}$ is given by
\begin{equation} \label{eq11}
dW_{t}^{\mathbb{Q}}=dW_{t}^{\mathbb{P}}+\Gamma_{t}dt
\end{equation}
where $\Gamma_{t}$ represents the risk premium.
Christensen et al. [$2011$] ([article here](https://www.frbsf.org/wp-content/uploads/sites/4/wp07-20bk.pdf)) want to preserve affine dynamics under both the risk-neutral measure and the real-world probability measure; thus the risk premium must be an affine function of the factors:
\begin{equation} \label{eq12}
\Gamma_{t}=\begin{pmatrix}
\gamma_{1}^{0}\\
\gamma_{2}^{0}\\
\gamma_{3}^{0}
\end{pmatrix}+\begin{pmatrix}
\gamma_{1,1}^{1} & \gamma_{1,2}^{1} & \gamma_{1,3}^{1} \\
\gamma_{2,1}^{1} & \gamma_{2,2}^{1} & \gamma_{2,3}^{1}\\
\gamma_{3,1}^{1} & \gamma_{3,2}^{1} & \gamma_{3,3}^{1}
\end{pmatrix}\begin{pmatrix}
X_{t}^{1}\\
X_{t}^{2}\\
X_{t}^{3}
\end{pmatrix}
\end{equation}
Any help?
|
Mean level of the state variables under the risk-neutral measure in Arbitrage-free Nelson Siegel
|
CC BY-SA 4.0
| null |
2023-04-01T19:05:45.543
|
2023-04-03T08:34:08.370
| null | null |
44881
|
[
"stochastic-calculus",
"yield-curve",
"bond-yields",
"no-arbitrage-theory",
"term-structure"
] |
75093
|
1
| null | null |
3
|
237
|
In a high-frequency environment, such as a proprietary trading firm or market making firm, the primary goal of the risk management team would be to limit potential losses, but how is that done in this specific environment?
What kind of risk models are used in a high-frequency context? I have a basic understanding of (quantitative) risk management/models, but not in a setting where data is high-frequency and irregularly spaced. Does this change the modelling substantially?
Furthermore, are the following also major considerations?
- Review code written by traders to check for possible shortcomings.
- Review strategies and models to ensure they are sound.
- Check trade permissions, position limits.
- Check the data in use.
- Check the hardware and software in use.
If they do, could you expand on how so?
Thank you in advance.
|
High-frequency risk management methodologies
|
CC BY-SA 4.0
| null |
2023-04-01T21:30:47.883
|
2023-04-04T07:29:11.460
|
2023-04-02T10:36:51.517
|
50477
|
50477
|
[
"risk",
"trading",
"risk-management",
"algorithmic-trading",
"high-frequency"
] |
75094
|
2
| null |
74901
|
4
| null |
There is a closed-form formula for the probability $\mathbb{P}(\tau = t_i)$.
First, we remind that
$$S_t=S_0\cdot \exp\left(\left(\mu-\frac{1}{2}\sigma^2 \right)t+\sigma W_t \right)
$$
For $i=1$, it's easy that
$$
\begin{align}
\mathbb{P}(\tau = t_1) &= \mathbb{P}(S_{t_1}> K ) \\
&=\mathbb{P}\left(W_{t_1}> \frac{\ln \left(\frac{K}{S_0}\right) -\left(\mu-\frac{1}{2}\sigma^2 \right)t_1}{\sigma} \right)\\
&=\color{red}{\Phi\left( \frac{-\ln \left(\frac{K}{S_0}\right) +\left(\mu-\frac{1}{2}\sigma^2 \right)t_1}{\sigma \sqrt{t_1}} \right)}
\end{align}
$$
where $\Phi(\cdot)$ is the cumulative distribution function of the univariate standard normal distribution $\mathcal{N}(0,1)$.
For $i \ge 2$, we have
$$
\begin{align}
\mathbb{P}(\tau = t_i) &= \mathbb{P}(\bigcap_{0 \leq k \leq i-1} \{S_{t_k}\le K \} \cap \{S_{t_i}> K \} ) \\
&= \mathbb{P}\left(\bigcap_{0 \leq k \leq i-1} \left\{W_{t_k} \le \frac{\ln \left(\frac{K}{S_0}\right) -\left(\mu-\frac{1}{2}\sigma^2 \right)t_k}{\sigma} \right\} \cap \left\{W_{t_i}> \frac{\ln \left(\frac{K}{S_0}\right) -\left(\mu-\frac{1}{2}\sigma^2 \right)t_i}{\sigma} \right\} \right) \tag{1}\\
\end{align}
$$
We notice that the vector $(W_{t_1}, W_{t_2},...,W_{t_i})$ is a $i$-variate normal distribution with zero mean and the covariance matrix $\mathbf{\Sigma} \in \mathbb{R}^{i\times i}$ defined by
$$
\Sigma_{hk} = Cov (W_{t_h},W_{t_k}) = \min \{t_h,t_k\} \qquad \text{for }1\le h,k\le i \tag{2}
$$
By denoting $\Phi_i(\mathbf{L},\mathbf{U};\mathbf{0}_i,\mathbf{\Sigma} )$ the probability mass of the $i$-variate normal distribution $\mathcal{N}_i (\mathbf{0}_i,\mathbf{\Sigma}) $ over the box $[\mathbf{L},\mathbf{U}]$, with
- zero mean $\mathbf{0}_i$,
- covariance matrix $\mathbf{\Sigma}$ defined by $(2)$
- from the lower bound $\mathbf{L}$ to the upper bound $\mathbf{U}$ with $\mathbf{L}, \mathbf{U} \in \mathbb{R}^{i}$
$$L_k=\begin{cases}
-\infty & \text{if $0\le k\le i-1$ }\\
\frac{\ln \left(\frac{K}{S_0}\right) -\left(\mu-\frac{1}{2}\sigma^2 \right)t_k}{\sigma} & \text{if $k = i$ }\\
\end{cases}
$$
$$U_k=\begin{cases}
\frac{\ln \left(\frac{K}{S_0}\right) -\left(\mu-\frac{1}{2}\sigma^2 \right)t_k}{\sigma} & \text{if $0\le k\le i-1$ }\\
+\infty & \text{if $k = i$ }\\
\end{cases}
$$
Then, from $(1)$, we have
$$\mathbb{P}(\tau = t_i) = \color{red}{\Phi_i(\mathbf{L},\mathbf{U};\mathbf{0}_i,\mathbf{\Sigma} ) }$$
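A quick numerical check of this formula (illustrative parameters; scipy's multivariate normal CDF is used, with the half-open box written as a difference of two orthant probabilities):
```
import numpy as np
from scipy.stats import norm, multivariate_normal

S0, K, mu, sigma = 100.0, 110.0, 0.05, 0.2
t = np.array([0.25, 0.5, 0.75, 1.0])                        # monitoring dates t_1..t_n
u = (np.log(K / S0) - (mu - 0.5 * sigma**2) * t) / sigma    # barriers for W_{t_k}

def prob_tau(i):
    # P(tau = t_i), with i 1-based
    if i == 1:
        return norm.cdf(-u[0] / np.sqrt(t[0]))
    cov = np.minimum.outer(t[:i], t[:i])                    # Cov(W_th, W_tk) = min(t_h, t_k)
    p_head = multivariate_normal(np.zeros(i - 1), cov[:i-1, :i-1]).cdf(u[:i-1])
    p_all = multivariate_normal(np.zeros(i), cov).cdf(u[:i])
    return p_head - p_all                                   # P(head) - P(head and W_{t_i} <= u_i)

# Monte Carlo comparison
rng = np.random.default_rng(0)
dW = rng.standard_normal((200_000, len(t))) * np.sqrt(np.diff(t, prepend=0.0))
S = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.cumsum(dW, axis=1))
crossed = (S > K).any(axis=1)
first = np.argmax(S > K, axis=1)
for i in range(1, len(t) + 1):
    print(i, round(prob_tau(i), 4), round(np.mean(crossed & (first == i - 1)), 4))
```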
| null |
CC BY-SA 4.0
| null |
2023-04-01T22:14:16.717
|
2023-04-02T10:18:34.487
|
2023-04-02T10:18:34.487
|
24336
|
24336
| null |
75095
|
2
| null |
70065
|
1
| null |
The correct formula is:
$$ \Gamma_{\mathrm{DV\$}} = \frac{1}{2}\, \Gamma\, (S \times 1\%)^2 $$
Gamma dollars is the change in the delta dollars for a 1% change in the underlying around price S. Depending on what you're trading, you will need to include the contract multiplier next to S, as it is 1:1 only for stocks.
Source: TWS Guide - Report Metrics - Page 1010 of 1742
[https://www.interactivebrokers.com/download/TWSGuide.pdf](https://www.interactivebrokers.com/download/TWSGuide.pdf)
| null |
CC BY-SA 4.0
| null |
2023-04-01T23:53:08.117
|
2023-04-02T00:09:12.650
|
2023-04-02T00:09:12.650
|
17788
|
17788
| null |
75096
|
1
|
75098
| null |
0
|
29
|
I have an Interactive Broker account to trade regular stocks. They have a One Cancel Others order type that allows me to put both a stop loss (SL) and take profit (TP) order on a single position.
Is there any crypto exchange that allow me to do the same on my spot position? I used Kraken but they don't offer such feature.
Thank you for your time.
|
Any crypto exchanges with one-cancel-others order type?
|
CC BY-SA 4.0
| null |
2023-04-02T00:34:41.463
|
2023-04-02T05:41:29.857
| null | null |
17939
|
[
"exchange",
"cryptocurrency"
] |
75098
|
2
| null |
75096
|
1
| null |
Binance offers an OCO order, please have a look at this reference: [https://www.binance.com/en-BH/support/faq/what-is-an-oco-one-cancels-the-other-order-and-how-to-use-it-360032605831](https://www.binance.com/en-BH/support/faq/what-is-an-oco-one-cancels-the-other-order-and-how-to-use-it-360032605831)
>
A One-Cancels-the-Other (OCO) order combines one stop limit order and
one limit order, where if one is fully or partially fulfilled, the
other is canceled. An OCO order on Binance consists of a stop-limit
order and a limit order with the same order quantity. Both orders must
be either buy or sell orders. If you cancel one of the orders, the
entire OCO order pair will be canceled.
| null |
CC BY-SA 4.0
| null |
2023-04-02T05:41:29.857
|
2023-04-02T05:41:29.857
| null | null |
5656
| null |
75099
|
1
|
75101
| null |
1
|
120
|
I was wondering if anyone could help me understand Figure 2 Rama Cont's [Price Impact paper](https://academic.oup.com/jfec/article-abstract/12/1/47/816163)? It is [on arxiv as well](https://arxiv.org/pdf/1011.6402.pdf).
In Figure 2 (screen from arxiv version), they demonstrate how to derive change in midprice and I do not understand why the result is $-\delta$? (-1 tick)?
[](https://i.stack.imgur.com/bBpEl.png)
$\delta \frac{L^{b}-M^{s}}{D}=\delta \frac{7-15}{5}=-1.6\delta$
I understand the market order depletes one full tick, but they define $D$ as market depth and even tell us it is 5, yet clearly they are using 8.
Could anyone point me in the right direction, please?
Thanks!
|
Order Flow Imbalance calculation
|
CC BY-SA 4.0
| null |
2023-04-02T11:05:35.067
|
2023-04-02T20:52:14.860
| null | null |
22237
|
[
"market-microstructure",
"limit-order-book"
] |
75100
|
2
| null |
75090
|
2
| null |
I found the answer after extensive digging in this forum, particularly what gave me the answer was the following post [How does bloomberg calculate the discount rate from EUR estr curve? [closed]](https://quant.stackexchange.com/questions/73522/how-does-bloomberg-calculate-the-discount-rate-from-eur-estr-curve).
Thus, for the second deposit let $T_s$, $T_e$ denote the start and end of the deposit (in days), respectively, and let $t$ be the time for which you wish to calculate the zero rate. Given the holidays around 2022-12-30 we set $T_s = 4$ and $T_e = 94$, and we consider $t = 0$. Then the solution we are after is given by:
$$
r_2 = -\frac{365}{T_e-t} \cdot \log \left( \frac{DF(t,T_s)}{1+R \cdot (T_e - T_s)/360} \right).
$$
If we continue with calculating $DF(t, T_s)$ we obtain
$$
DF(t, T_s) = e^{(-r_1 \cdot (T_s -t) / 365)} = e^{(-r_1 \cdot (4 -0) / 365)} \approx 0.9997316.
$$
Then the final result is
$$
r_2 = -\frac{365}{94-0} \cdot \log \left( \frac{0.9997316}{1+0.02701 \cdot (94 - 4)/360} \right) \approx 0.0271740.
$$
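A compact numeric check of the two steps (same conventions as above):
```
import math

R1, R2 = 0.02416, 0.02701    # deposit rates
Ts, Te = 4, 94               # start/end of the 3M deposit in days from the curve date

r1 = -365.0 * math.log(1.0 / (1.0 + R1 * 1.0 / 360.0))                # O/N zero rate, cont. comp., ACT/365
df_ts = math.exp(-r1 * Ts / 365.0)                                    # DF(0, Ts)
r2 = -365.0 / Te * math.log(df_ts / (1.0 + R2 * (Te - Ts) / 360.0))   # zero rate at the 3M node
print(round(r1, 7), round(r2, 7))   # ~0.0244947 and ~0.0271742, matching the QuantLib nodes
```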
| null |
CC BY-SA 4.0
| null |
2023-04-02T12:22:37.637
|
2023-04-02T12:24:27.583
|
2023-04-02T12:24:27.583
|
52555
|
52555
| null |
75101
|
2
| null |
75099
|
3
| null |
The way I understand this:
- $15$ shares are market sold in figure 1, this corresponds with $15 / 5 = 3$ ticks
- $7$ limit bid orders are added after step 1. This fully replenishes one level and partially replenishes a second one.
After this the best bid is $-3 + 2 = -1$ ticks relative to the start, i.e. 1 tick lower.
I'm not sure why you say they are using 8; this seems to be (in absolute value) the change in shares bid, calculated as $-15 + 7 = -8$.
It seems they use $\lceil x \rceil$ to denote rounding of $x$ towards 0. Rounding towards zero is necessary as the bid or ask level only changes if a level is completely filled or emptied.
| null |
CC BY-SA 4.0
| null |
2023-04-02T14:48:36.187
|
2023-04-02T20:52:14.860
|
2023-04-02T20:52:14.860
|
848
|
848
| null |
75102
|
1
| null | null |
0
|
27
|
I want to compare some short-rate models based on likelihood and AIC. I will use least squares estimates.
Let's take the Vasicek model as an example and its discretized version:
$$
dr = \alpha(\beta-r)dt + \sigma dW
$$
$$
r_{t+1} = \alpha\beta\Delta t + (1-\alpha\Delta t)r_t + \sigma\sqrt{\Delta t} \varepsilon_t
$$
With R, the coefficients of `regression <- lm(rtnext ~ rt)` can be used to estimate $\alpha$ and $\beta$, and the residuals can be used for the 3rd parameter.
My question is as follows:
As the loglikelihood depends only on [RSS](https://en.wikipedia.org/wiki/Akaike_information_criterion#Comparison_with_least_squares), it seems it does not depend on $\sigma$. Can I take $\sigma$ into consideration, or did I miss something?
Note: I used the same implementation in R as [statsmodels](https://github.com/statsmodels./statsmodels/blob/6b66b2713c408483dfeb069212fa57c0ee1e078b/statsmodels/regression/linear_model.py#L1905)
And an additional, more straightforward example in R:
```
N <- 1000
k <- 3
x <- rnorm(N)
e <- rnorm(N)
y <- 0.5 + 2.3 *x + 1.5*e # Can I consider 1.5 as an additional parameter?
reg <- lm(y ~ x)
reg
(RSS <- sum(reg$residuals^2))
nobs <- N/2
(llf <- -nobs * log(RSS) - (1 + log(pi/nobs))*nobs)
logLik(reg)
```
|
Likelihood of least squares estimates of Vasicek model
|
CC BY-SA 4.0
| null |
2023-04-02T17:21:30.440
|
2023-04-02T17:21:30.440
| null | null |
28784
|
[
"vasicek",
"parameter-estimation"
] |
75103
|
1
| null | null |
3
|
112
|
Wikipedia introduces the [Roll Critique mean-variance tautology](https://en.wikipedia.org/wiki/Roll%27s_critique):
Any mean-variance efficient portfolio $R_p$ satisfies the CAPM equation exactly:
$$
E(R_i) = R_f + \beta_{ip}[E(R_p) - R_f]
$$
A portfolio is mean-variance efficient if there is no portfolio that has a higher return and lower risk than those for the efficient portfolio. Mean-variance efficiency of the market portfolio is equivalent to the CAPM equation holding. This statement is a mathematical fact, requiring *no* model assumptions.
Does anyone have a simple proof or intuition of this? The mean-variance frontier is a parabola with expected return on the left - the tangent line (Sharpe ratio) is different for each point on the parabola; if you used any other portfolio, you would get a different Sharpe ratio.
I know the answer is very near but am not smart enough to see it.
Maybe my question is: in what way does the CAPM represent optimal risk and return - is there a relationship to the Sharpe Ratio?
$$
\beta_{\text{CAPM}}= \mathrm{Cov}(x,m)/\mathrm{Var}(x) \\ \text{Sharpe}=\mathrm{E}(x)/\mathrm{Stdev}(x).
$$
Also it is described as a tautology - for example in the Beta anomaly, Betas do not line up with Returns (too flat), but the Roll Critique wording is very strong that mean variance efficiency and CAPM are exactly the same, not approximately.
|
Roll Critique - CAPM and mean variance tautology?
|
CC BY-SA 4.0
| null |
2023-04-02T19:36:59.443
|
2023-04-03T16:09:41.047
|
2023-04-03T16:09:41.047
|
57035
|
57035
|
[
"statistics",
"finance-mathematics",
"regression",
"capm"
] |
75104
|
2
| null |
75092
|
1
| null |
The reason for setting $\theta^{\mathbb{Q}}=0$ is that, along with other restrictions, it identifies the parameters of the model uniquely, which means that the model is then well-defined, and there is then a one-to-one relationship between the model parameters and the probability distribution of the data. If the model is uniquely identified, there is only one set of parameter values that can generate the observed data, given the specified model and vice versa.
This is described in Section 3 of the paper you linked to:
>
Because the latent state variables may rotate without changing the
probability distribution of bond yields, not all parameters in the
above model can be identified. Singleton (2006) imposes identifying
restrictions under the $\mathbb{Q}$-measure.
Setting
- the mean $\theta^{\mathbb{Q}}=0$,
- the volatility matrix $\Sigma$ equal to the identity matrix and
- the mean-reversion matrix $K^{\mathbb{Q}}$ equal to a triangular matrix
makes it possible to identify all the other model parameters uniquely from the data.
| null |
CC BY-SA 4.0
| null |
2023-04-02T19:54:33.740
|
2023-04-03T08:34:08.370
|
2023-04-03T08:34:08.370
|
5656
|
5656
| null |
75105
|
5
| null | null |
0
| null | null |
CC BY-SA 4.0
| null |
2023-04-02T22:14:03.453
|
2023-04-02T22:14:03.453
|
2023-04-02T22:14:03.453
|
-1
|
-1
| null |
|
75106
|
4
| null | null |
0
| null |
An interest rate swap is a financial derivative where two parties exchange interest payments on a specified notional principal over a set period. One party pays a fixed rate, while the other pays a floating rate tied to a reference rate (e.g., LIBOR). These swaps help manage interest rate risk, hedge against rate fluctuations, and enable speculation on future rate changes.
| null |
CC BY-SA 4.0
| null |
2023-04-02T22:14:03.453
|
2023-04-03T03:37:32.357
|
2023-04-03T03:37:32.357
|
5656
|
5656
| null |
75107
|
2
| null |
75088
|
3
| null |
## Generic knowledge about this kind of models
Let me try to get your model close to elements that are known:
- Time continuous Kyle's model is something that is solved in Çetin, Umut, and Albina Danilova. "Markovian Nash equilibrium in financial markets with asymmetric information and related forward–backward systems." (2016): 1996-2029.
Such a model shares a lot of your features:
  - the informed trader observes the value of an asset $\cal E$,
  - she can trade it to optimise her PnL at $T$,
  - in front of a Market Maker (MM) who does not see the true value of $\cal E$ but observes the flow $\alpha dt$ (same notation as yours!) of the informed trader, which is mixed with the noise traders' demand,
  - hence the MM "sees" a diffusion with a controlled drift: $dY = \alpha dt + \sigma dB$,
  - the MM has a CARA utility function (that is more general than your conditional expectation).

The difference is that the informed trader does not have another instrument $V$ to trade, but this is already complicated. Something that you fail to properly define in your model is how the market maker and the informed trader share (or not) information; in their paper, Çetin and Danilova have to use a Brownian bridge.
If you want to do something like them, they have a great book on this topic: Çetin, Umut, and Albina Danilova. Dynamic Markov Bridges and Market Microstructure: Theory and Applications. Vol. 90. Springer, 2018.
- Optimal execution with hedging is another story and focuses on simultaneously trading $\cal E$ and $V$ to hedge some market risk on the most liquid instrument ($V$ for you). There is this paper that shown how to do it on a portfolio: Lehalle, Charles-Albert. "Rigorous strategic trading: Balanced portfolio and mean-reversion." The Journal of Trading 4, no. 3 (2009): 40-46.
it deals with a full portfolio: the investor trades all the components of the portfolio, which are correlated and do not all have the same liquidity,
but it is done inside an "Almgren-Chriss" framework, which is discrete and not really well adapted to yours (or Çetin-Danilova's).
My advice would be to use a Cartea-Jaimungal framework and write your optimal trading with an instrument properly. It is not very difficult (I gave it at one of my exams, hence I prefer not to write the full solution here...), but mixing it with Çetin and Danilova may be tricky....
## Start of analysis: the one period model
Let me help you in the context of a one period model.
The mechanism of the proof should be this one
- you need to choose a pricing function for your Market Maker (MM), let me use the notation $f_\theta(\alpha)$, where $\theta$ are the parameters of the pricing function.
If you want to get inspiration you can have a look at Lehalle, Charles-Albert, Eyal Neuman, and Segev Shlomov. "Phase Transitions in Kyle's Model with Market Maker Profit Incentives." arXiv preprint arXiv:2103.04481 (2021) where a neural network is used (and theoretical results provided).
- let me replace the generic CARA function by the cash account of the informed trader; she wants to maximise
$$\mathbb{E}((Q_0-\alpha){\cal E} + \alpha (f_\theta(\alpha)-V)).$$
I let you check: she can liquidate her position (of size $Q_0-\alpha$) in the initial security and the value of the remaining position $\alpha$ is clear.
- To maximise this it is enough to find $\alpha^*$ such that
$$f_\theta(\alpha^*)+\alpha^*f_\theta'(\alpha^*)=V-{\cal E}.$$
This is nothing more than the derivative of the expression above with respect to $\alpha$ (a small symbolic check with a linear pricing rule is sketched right after this list).
- Now you have a relation between $\alpha^*$ and $(\theta,{\cal E})$ that is of primary importance in this kind of game (this is a kind of Stackelberg game, see Vasal, Deepanshu, and Randall Berry. "Master Equation for Discrete-Time Stackelberg Mean Field Games with a Single Leader." In 2022 IEEE 61st Conference on Decision and Control (CDC), pp. 5529-5535. IEEE, 2022.)
- This relation has to be reinjected in the pricing model $P=\mathbb{E}(V|B+\alpha)$, giving birth to something like
$$P=f_\theta(\alpha^*(\theta,{\cal E}))=\mathbb{E}(V|B+\alpha^*(\theta,{\cal E})).$$
This is the formula in $\theta$ of a regression of $V$ on $\alpha$.
- usually you would rather end up with a regression of $\cal E$ on $\alpha$ here, and not of $V$ on it.
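As a quick sanity check of the first-order condition in the list above, here is a minimal symbolic sketch with a hypothetical linear pricing rule $f_\theta(\alpha)=\lambda\alpha$ (the linear form is only an illustration, not part of the argument):
```
import sympy as sp

alpha, lam, V, E = sp.symbols("alpha lambda V E", real=True)
f = lam * alpha                                    # hypothetical linear pricing rule
foc = sp.Eq(f + alpha * sp.diff(f, alpha), V - E)  # first-order condition from above
print(sp.solve(foc, alpha))                        # alpha* = (V - E)/(2*lambda)
```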
## Conclusion and advice
I hope you understand that there is a lot of knowledge and literature on this topic. You should read more about it before attacking your problem: I am sure the reading will tell you how to modify your problem so that it reflects what you really want.
I am not sure that at this stage it is really the case.
| null |
CC BY-SA 4.0
| null |
2023-04-03T06:16:52.883
|
2023-04-04T07:12:10.937
|
2023-04-04T07:12:10.937
|
2299
|
2299
| null |
75108
|
2
| null |
75024
|
2
| null |
Dispersion trading is a way to mitigate correlation risk. The book "Foreign Exchange Option Pricing A Practitioners Guide" (Chapter 10 Multicurrency Options) introduces an analysis framework.
Lastly, people use a strike gap to smooth out the barrier of digital options, i.e. they use call/put spreads to replicate digital payoffs.
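For instance, a minimal Black-Scholes sketch of the last point (all inputs are hypothetical), replicating a cash-or-nothing digital call with a tight call spread:
```
import math

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.20
eps = 0.5   # half-width of the strike gap
# Long 1/(2*eps) calls at K-eps, short 1/(2*eps) calls at K+eps
digital_approx = (bs_call(S, K - eps, T, r, sigma) - bs_call(S, K + eps, T, r, sigma)) / (2 * eps)
print(round(digital_approx, 4))   # close to exp(-r*T)*N(d2), ~0.49 here
```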
| null |
CC BY-SA 4.0
| null |
2023-04-03T08:34:02.653
|
2023-04-03T08:34:02.653
| null | null |
42924
| null |
75109
|
2
| null |
73983
|
1
| null |
This is because QuantLib checks if the model you calibrated has "calendar arbitrage".
[https://github.com/lballabio/QuantLib/issues/1124](https://github.com/lballabio/QuantLib/issues/1124)
| null |
CC BY-SA 4.0
| null |
2023-04-03T10:23:45.190
|
2023-04-03T10:23:45.190
| null | null |
42924
| null |
75110
|
1
|
75114
| null |
2
|
193
|
As the title says, I am looking to see if there is a good approximation for the convexity of a Fixed Income instrument. Say I know all the parameters of the instrument, can the Convexity be written as a function of these?
|
If I know the Price, DV01, and Duration of a Fixed Income instrument, is there an approximation for the Convexity?
|
CC BY-SA 4.0
| null |
2023-04-03T11:00:39.087
|
2023-04-03T14:27:12.973
| null | null |
59552
|
[
"fixed-income",
"risk"
] |
75111
|
1
| null | null |
3
|
118
|
Options implied volatility (IV) and skew can be calculated using the current option prices. I ask to what extent those observed variables are expected to deviate from the "expected" or predicted values for those parameters.
Specifically, it is reasonable to assume that the current IV is a function of the current and historical volatility pattern of the underlying asset plus an extra variable representative of the genuine forward-looking expectation of the market participants (e.g., in case a significant release is pending).
My goal is to assess the weight of the "current and historical volatility and skew" in predicting the current IV and skew. In other words: how good is this at predicting the current IV.
|
Predicting Implied Volatility from current and historical volatility
|
CC BY-SA 4.0
| null |
2023-04-03T13:28:17.697
|
2023-04-03T14:07:38.127
| null | null |
22413
|
[
"derivatives"
] |
75113
|
2
| null |
75111
|
3
| null |
One simple approach for IV is to use volatility cones based off of realized volatility. See this PDF for an explanation of the approach [https://www.m-x.ca/f_publications_en/cone_vol_en.pdf](https://www.m-x.ca/f_publications_en/cone_vol_en.pdf) and this blog with some python code and an example [https://quantpy.com.au/black-scholes-model/historical-volatility-cones/](https://quantpy.com.au/black-scholes-model/historical-volatility-cones/)
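For illustration, a minimal sketch of building such a cone from daily closes (ticker, date range, windows and quantiles are arbitrary choices here):
```
import numpy as np
import pandas as pd
import yfinance as yf

px = yf.download("AAPL", start="2018-01-01", end="2021-01-01")["Close"].squeeze()
rets = np.log(px / px.shift(1)).dropna()

cone = {}
for w in (21, 42, 63, 126, 252):                  # horizons in trading days
    rv = rets.rolling(w).std() * np.sqrt(252)     # annualised rolling realized vol
    cone[w] = rv.quantile([0.05, 0.25, 0.50, 0.75, 0.95])

print(pd.DataFrame(cone).round(3))                # rows: quantiles, columns: window length
```
The current implied volatility for a given tenor would then be compared against the corresponding column of the cone.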
| null |
CC BY-SA 4.0
| null |
2023-04-03T14:07:38.127
|
2023-04-03T14:07:38.127
| null | null |
1764
| null |
75114
|
2
| null |
75110
|
4
| null |
Let $P$ represent the current price and let $P(-\Delta y)$ and $P(+\Delta y)$ represent the projected prices if the yield curve is shifted in parallel by the amounts $-\Delta y$ and $+\Delta y$ respectively (we need to exercise care with respect to the assumptions under which these projected prices are obtained, for example one typically assumes that spreads remain constant). Then:
\begin{align*}
\mbox{Convexity} &= \frac{1}{P} \frac{d^2P}{dy^2} \\
&\approx \frac{1}{P} \frac{\left(\frac{P(-\Delta y) - P}{\Delta y} - \frac{P - P(+\Delta y)}{\Delta y}\right)}{\Delta y} \\
&= \frac{1}{P} \frac{P(+\Delta y)+P(-\Delta y) - 2P}{(\Delta y)^2}
\end{align*}
You can recycle a version of the argument if you have information on the Durations $D(-\Delta y)$ and $D(+\Delta y)$ for small parallel shifts of the yield curve.
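A minimal numerical sketch of that central-difference approximation (the repricing function here is a hypothetical 10-year zero-coupon bond on a flat curve):
```
def convexity(price, dy=0.0001):
    # price(shift) must return the instrument price after a parallel shift of the curve
    p0, p_up, p_dn = price(0.0), price(+dy), price(-dy)
    return (p_up + p_dn - 2.0 * p0) / (p0 * dy ** 2)

# Hypothetical example: 10y zero-coupon bond on a flat 3% annually compounded yield
price = lambda dy: 100.0 / (1.03 + dy) ** 10
print(round(convexity(price), 2))   # ~103.7, close to the exact n*(n+1)/(1+y)**2
```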
| null |
CC BY-SA 4.0
| null |
2023-04-03T14:27:12.973
|
2023-04-03T14:27:12.973
| null | null |
50071
| null |
75115
|
1
|
75169
| null |
4
|
269
|
There is [a paper](https://lesniewski.us/papers/published/HedgingUnderSABRModel.pdf) by Bruce Bartlett introducing a modified delta for SABR model which accounts for the correlation between forward and volatility processes. The main result of the paper is that if $dF$ is a change in a forward rate $F$ then the average change in a SABR volatility parameter $\alpha$ is $\delta\alpha=\frac{\rho\nu}{F^\beta}dF$.
I took the following code computing Bartlett's delta from the [SABR and SABR LIBOR Market Models in Practice](https://link.springer.com/book/10.1057/9781137378644) book:
```
from scipy.stats import norm
import math
def haganLogNormalApprox (y, expiry , F_0 , alpha_0 , beta , nu , rho ):
'''
Function which returns the Black implied volatility ,
computed using the Hagan et al. lognormal
approximation .
@var y: option strike
@var expiry: option expiry (in years)
@var F_0: forward interest rate
@var alpha_0: SABR Alpha at t=0
@var beta : SABR Beta
@var rho: SABR Rho
@var nu: SABR Nu
'''
one_beta = 1.0 - beta
one_betasqr = one_beta * one_beta
if F_0 != y:
fK = F_0 * y
fK_beta = math .pow(fK , one_beta / 2.0)
log_fK = math .log(F_0 / y)
z = nu / alpha_0 * fK_beta * log_fK
x = math .log (( math .sqrt (1.0 - 2.0 * rho *
z + z * z) + z - rho) / (1 - rho))
sigma_l = (alpha_0 / fK_beta / (1.0 + one_betasqr /
24.0 * log_fK * log_fK +
math .pow( one_beta * log_fK , 4) / 1920.0) *
(z / x))
sigma_exp = ( one_betasqr / 24.0 * alpha_0 * alpha_0 /
fK_beta / fK_beta + 0.25 * rho * beta *
nu * alpha_0 / fK_beta +
(2.0 - 3.0 * rho * rho) / 24.0 * nu * nu)
sigma = sigma_l * ( 1.0 + sigma_exp * expiry)
else:
f_beta = math .pow(F_0 , one_beta)
f_two_beta = math .pow(F_0 , (2.0 - 2.0 * beta ))
sigma = (( alpha_0 / f_beta) * (1.0 +
(( one_betasqr / 24.0) *
( alpha_0 * alpha_0 / f_two_beta ) +
(0.25 * rho * beta * nu * alpha_0 / f_beta) +
(2.0 - 3.0 * rho * rho) /
24.0 * nu * nu) * expiry))
return sigma
def dPlusBlack(F_0 , y, expiry , vol):
'''
Compute the d+ term appearing in the Black formula.
@var F_0: forward rate at time 0
@var y: option strike
@var expiry: option expiry (in years)
@var vol: Black implied volatility
'''
d_plus = ((math.log(F_0 / y) + 0.5 * vol * vol * expiry)
/ vol / math.sqrt(expiry ))
return d_plus
def dMinusBlack(F_0 , y, expiry , vol):
'''
Compute the d- term appearing in the Black formula.
@var F_0: forward rate at time 0
@var y: option strike
@var expiry: option expiry (in years)
@var vol: Black implied volatility
'''
d_minus = (dPlusBlack(F_0 = F_0 , y = y, expiry = expiry ,
vol = vol ) - vol * math.sqrt(expiry ))
return d_minus
def black(F_0 , y, expiry , vol , isCall ):
'''
Compute the Black formula.
@var F_0: forward rate at time 0
@var y: option strike
@var expiry: option expiry (in years)
@var vol: Black implied volatility
@var isCall: True or False
'''
option_value = 0
if expiry * vol == 0.0:
if isCall:
option_value = max(F_0 - y, 0.0)
else:
option_value = max(y - F_0 , 0.0)
else:
d1 = dPlusBlack(F_0 = F_0 , y = y, expiry = expiry ,
vol = vol)
d2 = dMinusBlack(F_0 = F_0 , y = y, expiry = expiry ,
vol = vol)
if isCall:
option_value = (F_0 * norm.cdf(d1) - y *
norm.cdf(d2))
else:
option_value = (y * norm.cdf(-d2) - F_0 *
norm.cdf(-d1))
return option_value
def computeFirstDerivative (v_u_plus_du , v_u_minus_du , du):
'''
Compute the first derivatve of a function using
central difference
@var v_u_plus_du: is the value of the function
computed for a positive bump amount du
@var v_u_minus_du : is the value of the function
computed for a negative bump amount du
@var du: bump amount
'''
first_derivative = (v_u_plus_du - v_u_minus_du ) / (2.0 * du)
return first_derivative
def computeSABRDelta (y, expiry , F_0 , alpha_0 , beta , rho , nu , isCall):
'''
Compute the SABR delta.
@var y: option strike
@var expiry: option expiry (in years)
@var F_0: forward interest rate
@var alpha_0: SABR Alpha at t=0
@var beta : SABR Beta
@var rho: SABR Rho
@var nu: SABR Nu
@var isCall: True or False
'''
small_figure = 0.0001
F_0_plus_h = F_0 + small_figure
avg_alpha = (alpha_0 + (rho * nu /
math .pow(F_0 , beta )) * small_figure )
vol = haganLogNormalApprox (y, expiry , F_0_plus_h , avg_alpha ,
beta , nu , rho)
px_f_plus_h = black(F_0_plus_h , y, expiry , vol , isCall)
F_0_minus_h = F_0 - small_figure
avg_alpha = (alpha_0 + (rho * nu /
math .pow(F_0 , beta )) * (-small_figure ))
vol = haganLogNormalApprox (y, expiry , F_0_minus_h ,
avg_alpha , beta ,
nu , rho)
px_f_minus_h = black(F_0_minus_h , y, expiry , vol , isCall)
sabr_delta = computeFirstDerivative (px_f_plus_h ,px_f_minus_h ,
small_figure )
return sabr_delta
```
The code seems alright; however, I encountered a problem with wrong signs of deltas for several caplets (call options on a forward rate) and floorlets (put options on a forward rate) while working with a SABR model calibrated to a real surface. One would expect the delta of a call to be positive and the delta of a put to be negative, which is violated in the following case
```
BartlettDeltaPut = computeSABRDelta(y=0.06, expiry=1.50, F_0=0.0962688131761622,
alpha_0=0.0895853076638471, beta=0.5, rho=0.235477576202461, nu=1.99479846430177,
isCall=False)
BartlettDeltaCall = computeSABRDelta(y=0.10, expiry=0.25, F_0=0.07942844548137806,
alpha_0=0.127693338654331, beta=0.5, rho=-0.473149790316068, nu=2.46284420168144,
isCall=True)
```
resulting in
```
0.21186868757223573
-0.0012938212806158644
```
In contrast, the plain vanilla Black delta given by
```
import numpy as np
def Delta(k, f, t, v, isCall=True):
d1 = (np.log(f/k) + v**2 * t/2) / (v * t**0.5)
if isCall:
delta = norm.cdf(d1)
else:
delta = norm.cdf(d1) - 1
return delta
vol1 = haganLogNormalApprox(y=0.06, expiry=1.50, F_0=0.0962688131761622,
alpha_0=0.0895853076638471, beta=0.5, nu=1.99479846430177, rho=0.235477576202461)
vol2 = haganLogNormalApprox(y=0.10, expiry=0.25, F_0=0.07942844548137806,
alpha_0=0.127693338654331, beta=0.5, nu=2.46284420168144, rho=-0.473149790316068)
BlackDeltaPut = Delta(k=0.06, f=0.0962688131761622, t=1.50, v=vol1, isCall=False)
BlackDeltaCall = Delta(k=0.10, f=0.07942844548137806, t=0.25, v=vol2, isCall=True)
```
coupled with volatility values computed by Hagan et al. approximations from the code above would work just as expected producing negative delta for put and positive delta for call options:
```
-0.16385166669719764
0.1753400660949036
```
Why don't Bartlett's delta values make sense in this case? I looked through the code carefully and, to the best of my knowledge, it doesn't have any errors or typos.
|
Bartlett's delta gives wrong signs for calls and puts
|
CC BY-SA 4.0
| null |
2023-04-03T15:52:45.633
|
2023-04-10T13:11:17.407
|
2023-04-10T00:25:00.027
|
54838
|
27119
|
[
"option-pricing",
"programming",
"greeks",
"delta",
"sabr"
] |
75116
|
1
| null | null |
0
|
36
|
Regarding European Interbank Money Markets, at the beginning of each month, when the ECB performs LTRO operations, whereby it lends for a 3-month period, shouldn't the 3-m Euribor exactly match the refi rate?
If the Euribor is higher, no bank will have the incentive to borrow on the IBMM.
If the Euribor is lower, no bank will lend there and will prefer to renew lower loan amounts from the ECB to get rid of excess reserves.
What am I not getting?
Similarly, regarding the ESTER and the Standing Facilities: shouldn't the ESTER always be between the Marginal Lending Facility and the Deposit Facility rates? As of 3rd April 2023, it is below 3%.
Thank you.
|
Equality between ECB and IBMM rates
|
CC BY-SA 4.0
| null |
2023-04-03T15:57:24.343
|
2023-04-03T15:57:24.343
| null | null |
66504
|
[
"interest-rates",
"european"
] |
75119
|
2
| null |
75093
|
3
| null |
I haven’t worked with HFT infra but for MFT something along these lines is necessary.
The infrastructure for deploying high frequency trading is very complicated and unique to most firms. There's no single solution fitting all cases. However, for simplicity you can categorize risk rules into three groups (made-up names): infra risk, model-specific risk, and system-wide risks. Each of these is handled by the relevant teams.
Infrastructure risk: basically, you need a dev team to ensure all the software/hardware is working as expected and to have an action plan in case of failure. That includes necessary steps to ensure code quality (code review, testing, CI/CD), handling trading component connection/reconnection (pigs, FIX protocols, data feed connections), throughput control, latency considerations, etc. As you can see, these incorporate some risks but they are "IT" intensive and usually a researcher or trader wouldn't want to be responsible for them. A trader can monitor them and decide what to do if something goes wrong.
Model-specific risks: specific to researchers, these really depend on what you have deployed and how things could possibly go wrong, both from a theoretical and an implementation perspective.
System-wide risks: compliance and the things you listed. Trading servers usually have an Execution Management System (EMS) which handles all the communications with the venues and double-checks orders before they go out. For example, if you have a maxPositionSize per instrument you can implement a logic here which will reject anything above that and preferably notify the trader. Since compliance rules can be complicated, EMSs are designed to handle the technical ones.
| null |
CC BY-SA 4.0
| null |
2023-04-04T07:29:11.460
|
2023-04-04T07:29:11.460
| null | null |
63042
| null |
75120
|
2
| null |
74133
|
1
| null |
In theory, or when doing a simulation with a static orderbook, your assumption is right. In general it is correct that large size orders have a larger market impact. However, in real life there are other things you should take into account. Exchanges have market makers or other participants who have higher priority, and your market order does not necessarily reach the orderbook.
As for the relationship, it is arguably square-root. If you look up market impact you will find enough literature to decide for yourself whether that is true or not. You can refer to [this link](https://quant.stackexchange.com/questions/41937/market-impact-why-square-root) for more info.
| null |
CC BY-SA 4.0
| null |
2023-04-04T08:23:13.657
|
2023-04-04T08:23:13.657
| null | null |
63042
| null |
75121
|
1
| null | null |
2
|
121
|
A quant interview problem:
We have $2n$ identically sized balls in $n$ colors. For each color there are two balls, one ball is heavy and the other is light. All heavy balls weigh the same. All light balls weigh the same. How many weighings on a beam balance are necessary to identify all of the heavy balls?
I know how to calculate the result for $n=3$: say we start with colors = $[\text{white}, \text{red}, \text{blue}]$. For the first weighing we put $\text{white}_1$ and $\text{red}_2$ against $\text{white}_2$ and $\text{blue}_1$. Depending on the first outcome, we then need at most 2 weighings in total. But how about the general case of $2n$ balls?
|
Solution of extension of six ball puzzle
|
CC BY-SA 4.0
| null |
2023-04-04T08:34:45.390
|
2023-04-05T21:53:10.073
|
2023-04-05T21:53:10.073
|
5656
|
60314
|
[
"information"
] |
75122
|
1
| null | null |
0
|
37
|
The majority of estimators of realized skewness and realized kurtosis rely upon high frequency data. Unlike sample skewness and sample kurtosis, which are normally computed from long samples of daily or lower frequency return series, the reasoning is that the high frequency data contains information on the jump component, which is what drives the third/fourth moments, and because of this the realized skewness won't converge to the sample skewness.
Is it possible to estimate the realized skewness or the jump components to take into account when using sample skewness, using only daily (Open, High, Low, Close, Bid/Ask, Volume) data?
|
Realized Skewness using daily data
|
CC BY-SA 4.0
| null |
2023-04-04T09:54:18.717
|
2023-04-04T09:54:18.717
| null | null |
1764
|
[
"skewness",
"realized"
] |
75123
|
1
| null | null |
1
|
22
|
I am doing research on "Financial constraints, stock return and R&D in India". We have constructed a financial constraints (FC) index following Kaplan and Zingales (1997) and Lamont et al. (2001). We find a significant positive relation between FC and stock returns. R&D is insignificantly and negatively related to stock returns, but when R&D is interacted with FC, this interaction effect has a significant negative impact on stock returns. We can explain it as: as FC increases, the R&D-stock return relation becomes negative. What could be the possible explanation, and how do I justify these results? Please help me in understanding these results. Thank you,
regards
|
Financial constraints, stock return and R&D
|
CC BY-SA 4.0
| null |
2023-04-04T10:19:27.550
|
2023-04-04T10:19:27.550
| null | null |
42614
|
[
"asset-pricing"
] |
75124
|
2
| null |
71490
|
2
| null |
For a portfolio, the relation between the geometric mean return (GM) and the arithmetic mean return (AM) is determined by the volatility (V) of the returns and can be approximated as:
$$GM = AM - 1/2 \times V^2$$
This shows that if you lower the volatility of your portfolio without changing the arithmetic return, you will improve the geometric mean. This answers your question on the intuition for what happens here: by having not-perfect correlation between the assets you lower the portfolio volatility (diversification) relative to the case where you hold only one asset.
Apparently, in this example, the volatility used for the one asset class is
$$V = \sqrt{( 2 \times (AM-GM) )} = \sqrt{( 2 \times (0.05-0.013) )} = 27\%$$
The volatility of an equally weighted portfolio of 5 assets - each with the same volatility but with an assumed uniform mutual return correlation of 0.85 - would be:
$$V = \sqrt{( 5 \times (0.20 \times 27\%)^2 + 10 \times 2 \times 0.85 \times 0.20 \times 27\% \times 0.20 \times 27\%)} = 25.3\%$$
Since the expected arithmetic return for the five asset portfolio is still 5% (because in this example each assets is expected to return 5%) the GM of the 5-asset portfolio calculates to
$$\begin{align}GM &= AM - 1/2 V^2 \\
&= 0.05 - 1/2 \times (0.253)^2 \\
&= 1.8\%\end{align}$$
In general GM is lower than AM because volatility detracts from multiperiod average return. A simple example:
an arithmetic return of 20% followed by an arithmetic return of -10% does not result in a total return of 10%. It results in a return of $1.2 \times 0.9 - 1 = 8\%$.
So the arithmetic average one-period return over the two periods is $(20\% + (-10\%))/2 = 10\%$.
The geometric average one-period return is $( (1+0.20) \times (1-0.10) )^{1/2} -1 \approx 3.9\%$.
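A small numerical check of the figures above (inputs are the rounded values used in this answer):
```
import numpy as np

n, am, rho, sigma = 5, 0.05, 0.85, 0.27      # 5 assets, 5% AM, 0.85 correlation, 27% vol
w = np.full(n, 1.0 / n)                      # equal weights
cov = sigma ** 2 * (np.full((n, n), rho) + (1 - rho) * np.eye(n))
port_vol = np.sqrt(w @ cov @ w)
gm = am - 0.5 * port_vol ** 2                # GM = AM - V^2/2 approximation
print(round(port_vol, 3), round(gm, 3))      # ~0.253 and ~0.018
```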
| null |
CC BY-SA 4.0
| null |
2023-04-04T11:35:28.507
|
2023-04-05T07:06:56.150
|
2023-04-05T07:06:56.150
|
5656
|
66917
| null |
75126
|
1
| null | null |
1
|
92
|
In the treynor-black model the assumption is that markets are not fully optimal and it is possible to achieve additional alpha on top of the market portfolio. After a mean-variance optimization treynor-black model arrives at optimal weights for the active portfolio $w_A$
and $(1-w_A)$ is the allocation to the market portfolio.
$w_0 = \frac{\frac{\alpha_A}{\sigma_A^2(e)}}{\mu_m/\sigma^2_m}$
$w_A = \frac{w_0}{1+(1-B)w_0}$
I have trouble understanding the statement that the squared Sharpe ratio of the full portfolio (containing $w_A$ of the active portfolio and $(1-w_A)$ of the market portfolio) equals the sum of the squared Sharpe ratios, i.e.:
$S_p^2 =\left(\frac{\text{expected return}}{\text{volatility}}\right)^2 = \frac{\mu_m^2}{\sigma_m^2} + \frac{\alpha_A^2}{\sigma_A^2(e)}$
Below link gives a more detailed description of the problem.
But it is also described in identical way in the book of Bodie, Kane, Marcus "investments".
[https://www.diva-portal.org/smash/get/diva2:1576276/FULLTEXT02.pdf](https://www.diva-portal.org/smash/get/diva2:1576276/FULLTEXT02.pdf)
|
Derivation Treynor-Black model
|
CC BY-SA 4.0
| null |
2023-04-04T13:37:42.287
|
2023-04-04T13:37:42.287
| null | null |
17316
|
[
"portfolio-optimization",
"sharpe-ratio",
"treynor-black"
] |
75127
|
2
| null |
45817
|
3
| null |
This is a thought exercise, and I see two ways to think about it - one from the mathematical standpoint, in which the limit value of the Black-Scholes model is taken as `t` approaches infinity.
But the Black-Scholes model (and most other option models) values the option by determining a probability distribution of the payoff at expiry. But if the option is European and the maturity is infinite, then there is no payoff and thus the option is worthless. The theoretical option value based on Black-Scholes might be `S0` but the practical value is zero since you can never extract that value.
It would be like being handed a check that you could never cash.
There is a similar thought exercise for non-dividend-paying stocks. If a company lives indefinitely and never pays a dividend, does its stock have any value? If you can never extract your ownership piece (which is what stocks represent) then the stock is worthless. It's only the expectation of liquidation at some point (via merger, acquisition, etc.) or siphoning off value through dividends that a stock has any present value.
That said, as an interview question I would be more impressed if someone discussed the pros and cons of each method rather than asserting that one was right and one was wrong, since the premise is impractical anyway. The point of these types of question is to see how you think and reason through a problem (obviously demonstrating a knowledge of the concepts along the way) more than coming up with the "right" answer.
| null |
CC BY-SA 4.0
| null |
2023-04-04T14:38:14.073
|
2023-04-04T14:38:14.073
| null | null |
20213
| null |
75128
|
1
| null | null |
1
|
83
|
When calculating the portfolio's historical volatility, do I need to factor in the volatility of each asset individually and then do a covariance calculation, or can I get away with measuring the volatility of the total portfolio return?
My gut feeling says no, but I can't work out why.
|
Shortcut for calculating portfolio volatility
|
CC BY-SA 4.0
| null |
2023-04-04T15:36:25.207
|
2023-05-07T16:05:05.703
|
2023-04-04T16:37:12.750
|
60559
|
60559
|
[
"volatility",
"portfolio-management",
"historical-data"
] |
75130
|
1
| null | null |
0
|
135
|
I like to sell uncovered put options, using my valuation of the company as the strike price. I'm looking for a tool that takes stock identifier and strike price as input and outputs the optimal expiration date, i.e. one that has the highest ratio of option price to days until expiration. Can anybody suggest anything?
|
Best tool to find an optimal option?
|
CC BY-SA 4.0
| null |
2023-04-04T17:50:13.683
|
2023-04-19T09:53:22.000
|
2023-04-10T00:26:08.303
|
54838
|
66922
|
[
"options",
"option-pricing"
] |
75131
|
1
|
75140
| null |
1
|
81
|
For Libor swaps, the accrual for the floating leg is easy, as the cashflow is already known at the accrual start date. The calculation would be similar to how the accrual of a bond is calculated.
How about the accrual of the floating leg of an OIS swap? The cashflow is unknown until the accrual end date.
|
How is accrual calculated on the floating leg of an OIS swap
|
CC BY-SA 4.0
| null |
2023-04-04T20:15:45.540
|
2023-04-05T16:07:11.590
| null | null |
34825
|
[
"interest-rate-swap",
"ois",
"ois-swaps",
"accrual"
] |
75132
|
1
| null | null |
0
|
20
|
I am trying to identify best practices when it comes to Treasury Bill yield calculations. Without getting too granular, our calculation will use 365 days unless Feb 29 is between settlement date and maturity date, then 366. Others have said if Feb 29 is within 1 year after issue date, regardless of maturity date, use 366. Which is correct? I am struggling with how you calculate to a leap year when the bill matures prior to the leap year.
|
Treasury Bill Yield calculation
|
CC BY-SA 4.0
| null |
2023-04-04T21:14:54.480
|
2023-04-04T21:14:54.480
| null | null |
66924
|
[
"treasury"
] |
75133
|
1
| null | null |
5
|
148
|
As the question states, what are some relevant recent papers I, as a non-expert, should read on IR modelling, products, and mechanics (that do not involve AI/ML)?
I think my knowledge on this topic has not progressed much beyond Piterbarg's Funding beyond discounting (probably still an important paper?) and Boenkost & Schmidt's Cross currency swap valuation. So I have a lot of catching up to do.
My aim is not to become an expert on IR/FI overnight, but simply to not be completely ignorant of the latest developments.
Just to illustrate what I mean, for example the paper by [Lyashenko and Mercurio, Looking Forward to Backward Looking Rates](https://www.semanticscholar.org/paper/Looking-Forward-to-Backward-Looking-Rates%3A-A-for-Lyashenko-Mercurio/31f6d9dbe871c0a331902da17b1447bcb9102753) seems very relevant, but is it one of the most important in the last 5 yrs? Guidance here would be appreciated.
As there isn't really a right or wrong answer to the question I'd probably hesitate to give an 'accepted' answer if there are multiple answers, but I'll of course give good answers +1.
|
Most relevant papers on IR / discount rate(s) modelling in the last 5 years
|
CC BY-SA 4.0
| null |
2023-04-04T21:52:58.003
|
2023-04-05T10:48:52.903
|
2023-04-05T07:59:08.293
|
65759
|
65759
|
[
"interest-rates",
"swaps",
"interest-rate-swap",
"discounting",
"cross-currency-basis"
] |
75134
|
1
| null | null |
1
|
46
|
As I understand it, time value for European options is as follows:
[](https://i.stack.imgur.com/62rG0.png)
What if r=0? Then puts should behave the same as calls, right? Would the time value always be nonnegative or could it be negative?
|
What is the Time Value of European Options if r=0?
|
CC BY-SA 4.0
| null |
2023-04-04T22:11:41.300
|
2023-04-04T22:11:41.300
| null | null |
66428
|
[
"options",
"european-options",
"time"
] |
75135
|
1
| null | null |
1
|
83
|
I am working on creating a fixed-fixed cross-currency swap pricer in Python for EUR-USD and GBP-USD pairs. Here's my current approach:
- Bootstrap a GBP SONIA curve.
- Bootstrap a SOFR curve.
- Obtain the GBPUSD xccy basis (GBP LIBOR vs USD LIBOR), e.g., y bp for 10 years.
- Compute an adjusted GBP SONIA curve by summing the GBP SONIA Curve and Xccy basis (e.g., the 10-year GBPUSD xccy basis). This is a flat shift in the SONIA curve by the xccy basis.
- Discount the USD fixed leg using the SOFR curve.
- Discount the GBP fixed leg using the adjusted SONIA curve.
- Convert all cashflows to GBP using GBPUSD spot.
- Solving for the fixed rate on one of the legs to get a 0 NPV structure
When comparing our pricing with Bloomberg, there is a 10-20 bp difference, which I assume is mostly driven by:
- Xccy basis referencing LIBOR, but we are applying the shift to the SONIA curve.
- LIBOR fallback used on one/both legs.
- No bootstrapping happening on xccy basis; just applying a pure shift of the SONIA curve.
Challenges:
- We have limited data (only SONIA swaps, SOFR swaps, GBPUSD spot and xccy basis).
- We aim to get as close as possible to Bloomberg's fixed-fixed xccy rates (within 5 bps).
- We are using Quantlib and prefer not to do our bootstrapping.
Any suggestions or insights on how to improve our pricing model while working with these constraints would be greatly appreciated!
|
Improving Fixed-Fixed Cross-Currency Swap Pricing in Python with Limited Data and Quantlib
|
CC BY-SA 4.0
| null |
2023-04-05T09:23:06.960
|
2023-04-05T09:23:06.960
| null | null |
59347
|
[
"interest-rate-swap",
"cross-currency-basis"
] |
75136
|
2
| null |
71490
|
1
| null |
Like others mentioned, it's not correlation, but the number of assets in the portfolio that draws the geometric growth rate up to the arithmetic growth rate.
The annual geometric growth rate is given by:
$r - \frac{1}{2} \sigma^2$
In the portfolio sense, $r$ is the weighted average arithmetic returns of the underlying assets, and $\sigma$ is the net volatility of the constituents. The volatility section is more complicated to calculate. For $N$ equally weighted assets, the volatility of the portfolio is given by:
$σ_{average} * \sqrt{(ρ_{average} +1)/N}$
So you could expect that for large $N$, the volatility of the portfolio decreases proportionately to $\sqrt{1/N}$.
To better answer your question, backsolving from mean to geo-mean, the volatility of 1 asset looks to be ~ 35%. For the 5 asset portfolio the average volatility looks to be ~ 41.5%. If the correlation moved to 0 the geo-mean would be 3.28%, if the correlation turned to -0.85 the geo-mean would become 4.71%.
| null |
CC BY-SA 4.0
| null |
2023-04-05T09:59:09.770
|
2023-04-05T10:53:04.080
|
2023-04-05T10:53:04.080
|
16148
|
62004
| null |
75140
|
2
| null |
75131
|
3
| null |
The accrued is just the compounded product of the OIS fixings observed so far: $A=N\times\left(\prod_{i=0}^t\left(1+r_i\tfrac{n_i}{360}\right)-1\right)$, where $n_i$ is the number of days each fixing applies (usually 1, or 3 over a weekend), $N$ is the notional and $r_i$ is the reset rate observed. Usually the day count is ACT/360.
For example, if we've entered a swap three days ago and the OIS fixing was 4.83% on all days then our accrued on 10mm notional is around $4025:
`=((1+4.83%/360)*(1+4.83%/360)*(1+4.83%/360)-1)*10000000`
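A minimal sketch of the same compounding in Python (same assumed fixings and ACT/360 convention):
```
notional = 10_000_000
fixings = [(0.0483, 1), (0.0483, 1), (0.0483, 1)]   # (reset rate, days it applies)

growth = 1.0
for rate, days in fixings:
    growth *= 1 + rate * days / 360

accrued = notional * (growth - 1)
print(round(accrued, 2))   # ~4025.5
```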
| null |
CC BY-SA 4.0
| null |
2023-04-05T16:07:11.590
|
2023-04-05T16:07:11.590
| null | null |
31457
| null |
75143
|
1
| null | null |
3
|
64
|
I am currently researching the joint calibration problem of SPX options and VIX options. A question that comes to mind is the construction of each assets respective volatility surface.
In the articles I've looked at, there is no mention of constructing a volatility surface after having achieved a successful joint calibration.
To my knowledge, after having calibrated a model to market data it has to be checked/modified not to allow for calendar and butterfly arbitrage.
However, is this not also the case when building a volatility surface after a joint fit? Is this something that is obvious, so it is never mentioned? And lastly, even if you create arbitrage free volatility surfaces for each asset after the joint calibration, can we be certain that there is no arbitrage between the SPX and VIX?
|
Joint SPX and VIX calibration - volatility surfaces construction
|
CC BY-SA 4.0
| null |
2023-04-05T21:12:45.770
|
2023-04-05T21:12:45.770
| null | null |
33872
|
[
"options",
"arbitrage",
"vix",
"volatility-surface",
"spx"
] |
75146
|
1
| null | null |
1
|
84
|
I am a student in finance and have to work on a project for the semester.
I have to study the difference of resilience during financial crises between the 5 biggest US banks and the 5 biggest Canadian banks.
I know that Canadian banks are praised for their resilience and strong regulations.
That is why I chose the following hypothesis that I want to prove: Canadian banks are more resilient than U.S. banks during periods of financial turmoil.
My teacher wants me to use statistical methods to support my hypothesis, these methods include linear regression, variances analysis, means analysis, etc.
However, my first results are really disappointing and I wanted to have your point of view regarding what I have been doing until now.
I decided to start with the linear regression on Excel to see if I get results that could support my hypothesis.
I am currently studying 10 banks, 5 Canadian and 5 American.
Canadian banks:
- TD Bank
- Bank of Montreal (BMO)
- Royal Bank of Canada (RBC)
- The Bank of Nova Scotia
- The Canadian Imperial Bank of Commerce (CIBC)
American banks:
- Bank of America
- Wells Fargo
- JPMorgan
- Citigroup
- US Bancorp
For my first linear regression, I took one dependent variable (y) and three independent variables (x1), (x2), and (x3).
For my dependent variable (y), I chose to use the Return on Assets (ROA) of each bank from Q1 1997 to Q4 2022, which represents 104 observations, which I think is enough for my linear regression.
For my independent variables (x1), (x2), and (x3), I used:
- (x1): national inflation rates (%)
- (x2): national unemployment rates (%)
- (x3): national GDP
My data sources are Capital IQ for financial data and OECD.
My idea behind this regression is that, if there is a lower correlation between the bank's ROA and the three independent variables, it means that the bank is less impacted by the global financial and economic situation of the country and therefore more resilient.
On the other side, if there is a higher correlation, it means that the bank is more easily impacted, and therefore, less resilient.
However, I am very disappointed by my results. The average R and R² of Canadian banks are respectively 0.43 and 0.20 and for U.S. banks they are around 0.45 and 0.22. There is a slight difference but nothing crazy. I was expecting a bit more to be honest.
And for the P-values, they are often above 0.05 which, if I am not wrong, is not great. (Should be lower than 0.05)
Can I have your thoughts on my subject? Do you think I am missing something or just not doing it the right way?
I would be glad to have your reviews and suggestions. It would greatly help me.
Thank you very much!
|
How to analyse the resilience of banks during financial crises using linear regression and other statistical methods?
|
CC BY-SA 4.0
| null |
2023-04-06T11:08:49.190
|
2023-04-06T11:58:07.853
|
2023-04-06T11:58:07.853
|
848
|
66935
|
[
"statistics",
"finance-mathematics",
"mathematics",
"statistical-finance"
] |
75148
|
2
| null |
37239
|
0
| null |
You should try to start with implied volatilities as long as you have other financial instruments on similar underlyings that have good liquidity and can be priced using Black 76.
In this case, let's say you want to price treasury bond options. These are most likely OTC traded. Assume there are abundant trades in treasury futures options (related to bonds) traded on exchanges. Then it is a good idea to get a volatility surface from treasury futures options of different maturities and use these vols as input parameters.
Historical vols are backward-looking and your options are forward-looking, so it would be more natural to use implied vols in general. However in markets where products are thinly traded and implied vols are not stable or even available, you can use historical vols as long as it justifies your goals.
| null |
CC BY-SA 4.0
| null |
2023-04-07T02:46:46.690
|
2023-04-07T02:46:46.690
| null | null |
41885
| null |
75149
|
1
| null | null |
0
|
47
|
In Mark Joshi's "The concepts and practice of mathematical finance", section 1.2, an intuitive motivation is given for the "high risk, high returns" claim. It goes as follows:
Given that all assets are correctly priced by the market, how can we distinguish
one from another? Part of the information the market has about an asset is its riskiness.
Thus the riskiness is already included in the price, and since it will reduce the
price, the value of the asset without taking into account its riskiness must be higher
than that of a less risky asset. This means that in a year from now we can expect
the risky asset to be worth more than the less risky one. So increased riskiness
means greater returns, but only on average - it also means a greater chance of
losing money.
I really can't understand this. Would someone suggest the right point of view one should adopt when reading this sentence? Thanks a lot in advance for any help.
|
Intuition behind the risk-return relation (Mark Joshi's concepts 1.2)
|
CC BY-SA 4.0
| null |
2023-04-07T09:24:50.720
|
2023-04-07T09:30:24.290
| null | null |
58982
|
[
"risk",
"intuition"
] |
75150
|
2
| null |
75149
|
2
| null |
Investors want risk to be compensated; therefore, they will only buy riskier assets if the price is right, that is, only when they can expect higher returns compared to less risky assets.
| null |
CC BY-SA 4.0
| null |
2023-04-07T09:30:24.290
|
2023-04-07T09:30:24.290
| null | null |
848
| null |
75151
|
2
| null |
75128
|
0
| null |
Two portfolios can consist of different assets, but can have the same volatility over the same period of time, so individual analysis of the assets seems futile in this sense.
| null |
CC BY-SA 4.0
| null |
2023-04-07T15:45:32.770
|
2023-04-07T15:45:32.770
| null | null |
15898
| null |
75152
|
2
| null |
18007
|
1
| null |
The intuition is much simpler in the Bachelier setting. The portfolio that is long one each of all out-of-the-money puts plus all out-of-the-money calls has a quadratic payoff, $(S_t-S_0)^2$ times a constant depending on the distance between strikes, $\Delta K$. Now, this being a parabola with constant convexity, such a portfolio has a constant gamma, and hedging consists of hedging a parabola whereby delta is simply proportional to $(S_t-S_0)$. That's all.
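A quick numerical illustration of the quadratic payoff (hypothetical strike grid; the strip payoff approaches $(S_T-S_0)^2/(2\Delta K)$ as $\Delta K$ becomes small relative to the move):
```
import numpy as np

S0, dK = 100.0, 0.1
put_strikes = np.arange(50.0, S0, dK)               # OTM puts: K < S0
call_strikes = np.arange(S0 + dK, 150.0 + dK, dK)   # OTM calls: K > S0

def strip_payoff(ST):
    return np.maximum(put_strikes - ST, 0).sum() + np.maximum(ST - call_strikes, 0).sum()

for ST in (90.0, 100.0, 110.0, 120.0):
    print(ST, round(strip_payoff(ST), 1), round((ST - S0) ** 2 / (2 * dK), 1))
```
The small gap between the two printed columns is the usual discretization effect of a finite strike grid.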
| null |
CC BY-SA 4.0
| null |
2023-04-07T20:21:21.147
|
2023-04-07T20:21:21.147
| null | null |
48123
| null |
75154
|
1
|
75155
| null |
0
|
100
|
The value of a stock is the present value of all future dividends. This is sometimes called the Gordon Growth model. This model assumes that dividends increase at a constant rate. In the real world that may not be right. In addition, there is no way to know what the long term rate of the dividend growth will be.
One way is to look at the current dividend rate and the dividend rate a while back, say 10 years. You can then compute the annualized dividend growth rate.
It seems to me a better approach would be to look at the annual dividend rate, say for 10 years, and fit an exponential function to the dividend history. This function would be computed using a least squares approach, possibly giving more weight to recent dividends.
Would it be better to use the second approach over the first approach in computing the rate of growth in the dividend?
Please comment.
|
Modeling the price of a stock based upon its dividend history
|
CC BY-SA 4.0
| null |
2023-04-08T00:48:39.657
|
2023-04-09T14:17:07.197
|
2023-04-08T10:36:21.420
|
18353
|
18353
|
[
"equities",
"dividends"
] |
75155
|
2
| null |
75154
|
1
| null |
Your question borders on opinion-based, but I'll try.
Corporations' earnings and profits are volatile and unpredictable.
Publicly traded corporations' common share prices are even more volatile, driven by a lot of factors other than expectations of dividends, and boards believe that investors prefer dividend rates to be less volatile and more predictable.
In order to attenuate the inherent volatility, boards often announce plans for dividend rate and its growth. Although the plans are non-binding, corporations sometimes borrow money in order to pay dividends or otherwise use money that arguably might be spent better long term. In other words, people sometimes try hard to wipe out the information contained in share price and earnings time series.
So, what can you do, nevertheless, with a dividend rate time series? You could, for starters, see how well your model price, i.e. the present value of the indicated future dividends, explains the observed share price. Many others have looked at this, so you should have little trouble comparing your findings with published papers.
You could look at further inputs, such as industry, history of earnings, and history of debts, to characterize corporations that historically were better at making their dividends predictable, e.g. regulated utilities. You could try to predict which ones are likely to fail or succeed at this. You could even try to investigate how surprise changes in dividend rates affect stock prices.
| null |
CC BY-SA 4.0
| null |
2023-04-08T13:46:37.380
|
2023-04-09T14:17:07.197
|
2023-04-09T14:17:07.197
|
36636
|
36636
| null |
75156
|
2
| null |
75154
|
2
| null |
The Gordon Growth Model (GGM) is just a simple model. I don't imagine any serious user of the GGM believes in its assumptions or that calculating the true value of a stock is as simple as plugging in some numbers.
Your approach might indeed lead to better estimates as it uses more data than the GGM but past dividends alone are not enough to accurately value a stock. Disadvantages of your approach are that it's more complex and requires more data.
| null |
CC BY-SA 4.0
| null |
2023-04-08T13:49:02.950
|
2023-04-08T16:02:13.707
|
2023-04-08T16:02:13.707
|
848
|
848
| null |
75158
|
1
|
75159
| null |
0
|
36
|
I'm solving the following problem:
>
Two dealers compete to sell you a new Hummer with a list price of \$45,000. Dealer C offers to sell it for \$40,000 cash. Dealer F offers “0-percent financing:” 48 monthly payments of \$937.50. (48x937.50=45,000) (a) You can finance purchase by withdrawals from a money market fund yielding 2% per year. Which deal is better?
So I need to calculate the present value of the financing option with a yearly rate of $r=0.02$, and monthly cashflows of $C=\\\$937.50$. My logic for this is that we need to first convert the annual interest rate to the monthly rate so that $r\to r/12$. Moreover, we need to ensure that our discounting is consistent, in that $(1+r)^T$ represents $T$ years of the time horizon. Therefore, the exact expression for the present value is
$$
PV = C \sum_{n=1}^{48} \left(\frac{1}{\left(1+\frac{r}{12}\right)^{1/12}}\right)^n.
$$
However, the official solution to the problem states that
$$
PV = C \sum_{n=1}^{48} \left(\frac{1}{\left(1+r \right)^{1/12}}\right)^n.
$$
So the only difference is that the official solution compounds the yearly interest for each payment, and mine compounds on a monthly basis. So my question is which one is the correct solution.
|
Correct form of discount rate
|
CC BY-SA 4.0
| null |
2023-04-08T19:09:10.100
|
2023-04-08T19:22:10.667
|
2023-04-08T19:14:32.513
|
66956
|
66956
|
[
"fixed-income"
] |
75159
|
2
| null |
75158
|
1
| null |
The official solution is correct. Consider the case where $r = 0.02$. The monthly rate in this case is $1.02^{1/12}$ or about $1.0016516$.
By the way, $(1 + 0.02/12)^{1/12}$ is about $1.0001388$. That is not going to annualize to 2%.
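A small sketch of the resulting comparison (using the official convention of an effective annual rate, i.e. a monthly discount factor of $1.02^{-1/12}$):
```
C, r, n = 937.50, 0.02, 48
d = (1 + r) ** (-1 / 12)                        # one-month discount factor
pv_financing = C * d * (1 - d ** n) / (1 - d)   # geometric series, payments at month-end
print(round(pv_financing, 2))                   # ~43228 > 40000, so the $40,000 cash offer is the better deal
```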
| null |
CC BY-SA 4.0
| null |
2023-04-08T19:22:10.667
|
2023-04-08T19:22:10.667
| null | null |
18353
| null |
75160
|
1
| null | null |
1
|
52
|
This question is from Joshi's quant book.
Assume r = 0, σ = 0.1, T-t = 1, X = 100 = S(t).
Initially, the call is worth $3.99.
The first question asks what the value of the call is after a 98 percentile drop in S(t).
That was simple. Z = -2.05 is our critical value so plugging that into the distribution of S(T) (which I assumed to be log-normal), I get that the new S is 81.03 and the call is now worth $0.06.
The second question asks:
What if the stock had a distribution with fatter tails than a log-normal distribution. Would the value of the call be more or less?
My initial thoughts:
(1) Fatter tails means the 98 percentile will be further away compared to a lighter tail distribution so the 98 percentile drop in a fatter-tailed distribution will be lower compared to the 98 percentile drop in the log-normal.
(2) But, when calculating the call value using discounted expected payoff under the risk-neutral measure, a fatter-tail distribution will have greater probability of 'hitting' higher values that could cause it to finish ITM compared to a lighter-tailed distribution.
I'm not sure which effect wins out? Any guidance?
|
Call Value After 98 percentile drop in Stock Price
|
CC BY-SA 4.0
| null |
2023-04-08T22:04:54.140
|
2023-05-09T09:05:00.560
| null | null |
66638
|
[
"value-at-risk",
"call"
] |
75161
|
2
| null |
75160
|
2
| null |
I think that: fatter tails => extreme events are more likely => vol increases => the option price (keeping all other variables constant) increases
| null |
CC BY-SA 4.0
| null |
2023-04-09T07:35:12.597
|
2023-04-09T07:35:12.597
| null | null |
66960
| null |
75162
|
1
| null | null |
0
|
40
|
I'm working on a script to calculate and plot the Hurst Exponent and Smoothed Hurst Exponent for a stock's historical price data using Python. When I run the script, I face two major issues:
1. The Smoothed Hurst Exponent and the Hurst Exponent values are the same. I expect the Smoothed Hurst Exponent to be different from the Hurst Exponent, as it should be a moving average of the Hurst Exponent.
2. The plotting doesn't seem to be done correctly. I'm trying to plot the Hurst Exponent, the Smoothed Hurst Exponent, and the confidence intervals, but the resulting plot doesn't display the data as expected.
I'm looking for help in identifying the issues in my code that are causing these problems and suggestions on how to fix them.
Any assistance would be greatly appreciated
My code is as follows:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import yfinance as yf
from scipy.stats import linregress
# (a) - inputs
def inputs(barsize=100, slen=18, Min=8, Max=2):
return barsize, slen, Min, Max
# (b) - Declaration
Min = 8
Max = 2
fluc = np.full(10, np.nan)
scale = np.full(10, np.nan)
slope = np.nan
# (c) - SS function
def ss(series, period):
PI = 2.0 * np.arcsin(1.0)
SQRT2 = np.sqrt(2.0)
_lambda = PI * SQRT2 / period
a1 = np.exp(-_lambda)
coeff2 = 2.0 * a1 * np.cos(_lambda)
coeff3 = - np.power(a1, 2.0)
coeff1 = 1.0 - coeff2 - coeff3
filt1 = np.zeros_like(series)
for i in range(2, len(series)):
filt1[i] = coeff1 * (series[i] + (series[i - 1] if i - 1 >= 0 else 0)) * 0.5 + coeff2 * filt1[i - 1] + coeff3 * filt1[i - 2]
return filt1
# (d) - Calculations
def RMS(N1, N, csum):
seq = np.arange(1, N + 1)
y = csum[N1 : N1 + N]
sdx = np.std(seq) * np.sqrt(N / (N - 1))
sdy = np.std(y) * np.sqrt(N / (N - 1))
cov = np.cov(seq, y, bias=True)[0, 1] * (N / (N - 1))
r2 = np.power(cov / (sdx * sdy), 2)
rms = np.sqrt(1 - r2) * sdy
return rms
def Arms(bar, csum, barsize):
num = np.floor(barsize / bar).astype(int)
sumr = sum(RMS(i * bar, bar, csum) for i in range(num))
avg = np.log10(sumr / num)
return avg
def fs(x, barsize, Min, Max):
return np.round(Min * np.power(np.power(barsize / (Max * Min), 0.1111111111), x)).astype(int)
def hurst_exponent(close, barsize=100, slen=18, Min=8, Max=2):
# Calculate Log Return
r = np.log(close / np.roll(close, 1))
# Mean of Log Return
mean = np.convolve(r, np.ones(barsize) / barsize, mode="valid")
mean = np.pad(mean, (barsize - 1, 0), 'constant', constant_values=0)
# Calculate Cumulative Sum
csum = np.cumsum(r - mean)
# Set Ten Points of Root Mean Square Average along the Y log axis
fluc = np.array([Arms(fs(i, barsize, Min, Max), csum, barsize) for i in range(10)])
# Set Ten Points of data scale along the X log axis
scale = np.array([np.log10(fs(i, barsize, Min, Max)) for i in range(10)])
# Calculate Slope Measured From RMS and Scale on Log log plot using linear regression
slopes = np.array([np.cov(scale, fluc, bias=True)[0, 1] / np.var(scale, ddof=0) for i in range(len(close) - barsize + 1)])
# Calculate Moving Average Smoothed Hurst Exponent
smooth = ss(slopes, slen)
# Calculate Critical Value based on Confidence Interval (95% Confidence)
ci = 1.645 * (0.3912 / np.power(barsize, 0.3))
# Calculate Expected Value plus Critical Value
cu = 0.5 + ci
cd = 0.5 - ci
return slopes, smooth, cu, cd
# (e) - Plots
def plot_hurst_exponent(close, barsize=100, slen=18, Min=8, Max=2):
slopes, smooth, cu, cd = hurst_exponent(close, barsize, slen, Min, Max)
# Color of HE
c = "green" if slopes[-1] > cu else "blue" if slopes[-1] >= 0.5 else "red" if slopes[-1] < cd else "orange" if slopes[-1] < 0.5 else "black"
# Text of Table
text = "Significant Trend" if slopes[-1] > cu else "Trend" if slopes[-1] >= 0.5 else "Significant Mean Reversion" if slopes[-1] < cd else "Mean Reversion" if slopes[-1] < 0.5 else "N/A"
# Plotting
fig, ax = plt.subplots()
# Hurst Exponent
ax.plot(slope, label="Hurst Exponent", color=c, linewidth=2)
# Confidence Interval
ax.axhline(cu, label="Confidence Interval", color="purple", linestyle="--")
ax.axhline(cd, label="Confidence Interval", color="purple", linestyle="--")
# Moving Average
ax.plot(smooth, label="MA", color="gray", linewidth=1)
# 0.5 Mid Level
ax.axhline(0.5, color="black", linestyle="dashed")
# Display legend and text
ax.legend()
plt.title(f"Hurst Exponent: {slopes[-1]:.3f} ({text})")
print(f"Hurst Exponent: {slopes[-1]:.3f}")
print(f"Smoothed Hurst Exponent: {smooth[-1]:.3f}")
plt.show()
# Example usage
import yfinance as yf
# Fetch historical stock data for Apple Inc. (AAPL)
ticker = "AAPL"
data = yf.download(ticker, start="2020-01-01", end="2021-01-01")
# Use the 'Close' column for Hurst Exponent calculation
close_prices = data['Close'].values
plot_hurst_exponent(close_prices)
```
|
Hurst Exponent and Smoothed Hurst Exponent values are the same and incorrect plotting
|
CC BY-SA 4.0
| null |
2023-04-09T08:35:59.707
|
2023-04-09T08:35:59.707
| null | null |
66961
|
[
"programming",
"quant-trading-strategies",
"finance-mathematics",
"mathematics",
"financial-engineering"
] |