Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
75163
|
1
| null | null |
1
|
24
|
Let's say I have 100 million dollars. My active risk budget is 5%. I have an active fund with active risk of 10%. What will my mix be, in dollar terms, between this active portfolio and a passive ETF (assume the passive ETF has 0 active risk)?
I have two answers. One says "Allocation to active fund = Active risk budget / Active risk of active fund", i.e. in this case 5/10 = 50% into the active fund.
The other answer is Active Fund Allocation = minimum(Active Fund Active Risk, Active Risk Budget), i.e. only 5% to the active fund (5 million dollars).
Which one is right? Thank you in advance!
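For concreteness, here is a small sketch (my own illustration, not from either answer) of what the two candidate rules give in dollar terms:
```
portfolio = 100_000_000      # total assets in dollars
risk_budget = 0.05           # active risk budget
fund_active_risk = 0.10      # active risk of the active fund

# Answer 1: weight = active risk budget / active risk of the fund
w1 = risk_budget / fund_active_risk          # 0.50
# Answer 2: weight = min(active fund active risk, active risk budget)
w2 = min(fund_active_risk, risk_budget)      # 0.05

print(f"Rule 1: {w1:.0%} active -> ${w1 * portfolio:,.0f} active fund, ${(1 - w1) * portfolio:,.0f} passive ETF")
print(f"Rule 2: {w2:.0%} active -> ${w2 * portfolio:,.0f} active fund, ${(1 - w2) * portfolio:,.0f} passive ETF")
```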
|
How do I allocate between passive and active strategy using active risk budget?
|
CC BY-SA 4.0
| null |
2023-04-09T09:38:29.470
|
2023-04-09T09:38:29.470
| null | null |
66962
|
[
"portfolio-optimization",
"risk-management"
] |
75164
|
1
| null | null |
4
|
459
|
I'm trying to grasp what exactly the effects of higher ongoing interest rates are on holding calls/puts. I am not asking what the effect of a change in interest rates is on call/put prices.
I'm interested chiefly in equity and equity-index options (though I expect the answers are the same for bond futures, replacing dividend yield with coupon yield?).
I understand the cost of carry on equity or equity index futures (I hope) okay. After all, it's pretty easy: futures price = underlying price + [normalised] interest rate - [normalised expected] dividend yield. So while you carry (long) a future, its value decays in line with the interest rate (assuming $r>d$), since the price of the future at expiry converges with the underlying. And obviously a shorted future's value appreciates in line with the interest rate.
Now I also understand (I hope) that one can create a synthetic future using a long ATM call and a short ATM put. I'd be rather surprised if the cost of carry for this synthetic future were any different than for the actual future. So I'd expect the combined cost of carry of the sold put and the bought call to come to (interest rate - dividend yield), as for a long future position.
What I want to know is how this cost of carry is to be divided between the constituents (long call, short put) of the synthetic future, if one holds one or the other. I assume that it is related to moneyness - that a deep ITM call has cost of carry nearly equal to that of the long future (at which price the deep OTM put has almost zero cost of carry). And that a deep ITM short put also has cost of carry equal to that of the long future (at which price the deep OTM call has almost no cost of carry).
If I were to hazard a guess as to what proportion of the future's cost of carry is payable on a put or call I would guess that it is in proportion to delta (e.g. long put cost of carry = delta * long future's cost of carry, which would be of opposite sign since the delta of the long put is negative). But that's a guess.
Can anyone enlighten me further?
Thanks
|
Questions on options cost of carry, and relationship to futures cost of carry
|
CC BY-SA 4.0
| null |
2023-04-09T11:53:40.610
|
2023-04-20T10:50:33.443
|
2023-04-09T13:51:54.550
|
16148
|
66748
|
[
"options",
"interest-rates",
"futures"
] |
75165
|
1
| null | null |
0
|
55
|
It is easy to download Bloomberg data in Python or R using their blpapi API. The only requirement is that one is logged in on the local Bloomberg terminal.
We often find ourselves working on publishing platforms such as Posit Connect or Anaconda Enterprise. Do you know if it is possible to download Bloomberg data there? The issue is that these platforms usually run on external servers, where a user doesn't have access to his/her local Bloomberg terminal.
But do you know of any solution that allows for this?
|
Bloomberg access from publishing platforms such as Posit Connect
|
CC BY-SA 4.0
| null |
2023-04-09T15:10:18.407
|
2023-04-09T16:19:13.463
| null | null |
66172
|
[
"data",
"bloomberg"
] |
75166
|
2
| null |
75165
|
1
| null |
Can you run a local proxy that has access to Bloomberg and access that from Posit or Anaconda?
However, you need to check your contract with Bloomberg: Is what you’re trying to do even allowed? Maybe it’s best to get on the phone with them and figure out what is possible. I don’t think you’ll be the first to ask.
| null |
CC BY-SA 4.0
| null |
2023-04-09T15:49:29.630
|
2023-04-09T15:49:29.630
| null | null |
848
| null |
75167
|
2
| null |
75165
|
3
| null |
There is a reason why you need to be logged in on the Bloomberg terminal - it is a desktop solution. As such, you are not allowed to share, distribute, or use your data at an enterprise level or with other users.
If you require this, there are specific data license solutions as well as the so-called Server API ([SAPI](https://www.bloomberg.com/professional/product/server-api/)) offering. I think the latter is still quite limited with regards to sharing data. As Bob Jansen suggested, it is best to get in touch with BBG directly.
Side remark: technically, R is not supported by Bloomberg's API directly. Rblpapi is a third-party interface that connects to the C++ API Bloomberg offers.
| null |
CC BY-SA 4.0
| null |
2023-04-09T16:19:13.463
|
2023-04-09T16:19:13.463
| null | null |
54838
| null |
75168
|
2
| null |
75130
|
3
| null |
This makes little sense. An option with the same strike does not get much more expensive if you add a day to expiration. However, if you divide by the number of days, you reduce the value of the option significantly every day.
Example in Python (using standard Black Scholes Merton, thus ignoring that deep ITM puts, in the presence of positive interest rates $r>0$, can be subject to early exercise):
```
import numpy as np
import pandas as pd
from scipy.stats import norm

def BSM(S, K, r, d, t, sigma, cp_flag):
    # Black-Scholes-Merton price; cp_flag = 1 for a call, -1 for a put
    d1 = (np.log(S/K) + (r - d + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    opt = cp_flag*S*np.exp(-d*t)*norm.cdf(cp_flag*d1) - cp_flag*K*np.exp(-r*t)*norm.cdf(cp_flag*d2)
    return opt

pd.DataFrame(zip([i for i in range(1, 365)],
                 [BSM(100, 100, 0, 0, i/365, 0.3, -1) for i in range(1, 365)],
                 [BSM(100, 100, 0, 0, i/365, 0.3, -1)/i for i in range(1, 365)]),
             columns=['Days', 'Put Value', 'Put/Days'])
```
[](https://i.stack.imgur.com/PW0wh.png)
[](https://i.stack.imgur.com/n03M4.png)
This simple example even ignores that IV usually flattens with increased maturity (which will make the longer tenors comparatively cheaper). For example, the 165 strike Apple put option expiring on 14th of April (8 days as of the last available price) costs ~ USD 2.1 at the time of writing: $2.1/8 = 0.2625$.
The one expiring on 18th of August (134 days) costs ~ USD 10. $10/134 = 0.0746$.
| null |
CC BY-SA 4.0
| null |
2023-04-09T17:07:20.123
|
2023-04-09T17:07:20.123
| null | null |
54838
| null |
75169
|
2
| null |
75115
|
6
| null |
Bartlett's delta as computed in your code is a simple finite difference (FD), also called bump and reprice, of the Black values. I do not think there is anything wrong here, besides the fact that you are not comparing like for like. Long story short, Bartlett's delta
more accurately describes the change in the value of the option because it "incorporates" a vega (or vanna / DdeltaDvol) component. For certain conditions, this can result in different signs relative to Black deltas (which ignore the shape of the vol surface).
The simple Black delta is a "holding everything else constant" delta, meaning IVOL is fixed and stable. What you compute with the Bartlett delta is an FD value with different IVs for the two shifts.
Using your `haganLogNormalApprox` function results in the following vols (I renamed some variables due to personal preference):
```
import math

F_0 = 0.07942844548137806
small_figure = 0.0001
y = 0.10
expiry = 0.25
alpha_0 = 0.127693338654331
beta = 0.5
rho = -0.473149790316068
nu = 2.46284420168144

F_pos = F_0 + small_figure
F_neg = F_0 - small_figure
# alpha adjusted alongside the forward bump (Bartlett's correction)
avg_alpha_pos = alpha_0 + (rho * nu / math.pow(F_0, beta)) * small_figure
avg_alpha_neg = alpha_0 + (rho * nu / math.pow(F_0, beta)) * (-small_figure)

vol_0 = haganLogNormalApprox(y=0.10, expiry=0.25, F_0=F_0, alpha_0=0.127693338654331,
                             beta=0.5, nu=2.46284420168144, rho=-0.473149790316068)
vol_neg = haganLogNormalApprox(y, expiry, F_neg, avg_alpha_neg, beta, nu, rho)
vol_pos = haganLogNormalApprox(y, expiry, F_pos, avg_alpha_pos, beta, nu, rho)

print(f' Vol Black = {vol_0}')
print(f' Vol Positive = {vol_pos}')
print(f' Vol Negative = {vol_neg}')
vol_avg = (vol_pos + vol_neg)/2
print(f' Vol Average (Centred) = {vol_avg}')
```
Result:
```
Vol Black = 0.44137660374291093
Vol Positive = 0.4396541985333539
Vol Negative = 0.4431007329951456
Vol Average (Centred) = 0.44137746576424974
```
I also rewrote Black because I find it easier to work with less convoluted functions:
```
import numpy as np
from scipy.stats import norm

def B(F, K, t, sigma, cp_flag):
    d1 = (np.log(F/K) + 0.5 * sigma**2 * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    opt = cp_flag*(F*norm.cdf(cp_flag*d1) - K*norm.cdf(cp_flag*d2))
    delta = cp_flag*norm.cdf(cp_flag*d1)
    return opt, delta
```
Now, let's plug in the numbers:
```
up = B(F_pos,0.10,0.25, vol_pos, 1)
down = B(F_neg,0.10,0.25, vol_neg , 1)
centred = B(F_0,0.10,0.25, vol_0 , 1)
print(f'Call Val Up = {up[0]}, Delta Up = {up[1]}')
print(f'Call Val Down = {down[0]}, Delta Down = {down[1]}')
print(f'Call Val Centred = {centred[0]}, Delta Centred = {centred[1]}')
print(f'Delta FD = {(up[0]-down[0])/(2.0*small_figure)}')
print(f'Delta FD centred = {(B(F_pos,0.10,0.25, vol_0, 1)[0]- B(F_neg,0.10,0.25, vol_0, 1)[0])/(2.0*small_figure)}')
```
to obtain
```
Call Val Up = 0.0015010173779386893, Delta Up = 0.1756511129543487
Call Val Down = 0.0015012761421948125, Delta Down = 0.17503196098493057
Call Val Centred = 0.0015011434512114622, Delta Centred = 0.1753400660949036
Delta FD = -0.0012938212806158644
Delta FD centred = 0.1753410636751336
```
As you can see, using a constant vol (the average of the up and down vol specifically) results in pretty much the same delta as in Black. What you plug into the FD is a different IV for up and down, which is in line with the SABR model. I am using the formula from this [answer](https://quant.stackexchange.com/a/74208/54838), which are the equations shown on p. 91 of the article [Managing Smile Risk](http://web.math.ku.dk/%7Erolf/SABR.pdf) by Hagan et al. in Wilmott magazine. Re-written in Python, the code looks like this:
```
def vol(β, α, ρ, ν, t_ex, f, K):
    # Hagan et al. (2002) lognormal approximation; note ln^2 and ln^4 of f/K (powers of the natural log)
    A = α / (((f*K)**((1-β)/2)) * (1 + ((1-β)**2)/24 * np.log(f/K)**2 + ((1-β)**4)/1920 * np.log(f/K)**4))
    B = 1 + (((1-β)**2)/24 * (α**2/(f*K)**(1-β)) + (1/4)*α*β*ρ*ν/((f*K)**((1-β)/2)) + (2-3*ρ**2)/24 * ν**2)*t_ex
    z = ν/α * (f*K)**((1-β)/2) * np.log(f/K)
    χ_z = np.log((np.sqrt(1 - 2*ρ*z + z**2) + z - ρ)/(1 - ρ))
    atm = α/(f**(1-β)) * (1 + (((1-β)**2)/24 * (α**2/(f*K)**(1-β)) + (1/4)*α*β*ρ*ν/((f*K)**((1-β)/2)) + (2-3*ρ**2)/24 * ν**2)*t_ex)
    cond = f == K
    return atm if cond else A*z/χ_z*B
```
If we plot this as a function of F, it is clear that it makes sense for IV to be above Black IV if F is shocked down, and below Black IV if F is shocked up, even if you did not adjust $α$ (which makes the effect a tad more pronounced):
```
import matplotlib.pyplot as plt

β, α, ρ, ν, t_ex, f = 0.5, 0.127693338654331, -0.473149790316068, 2.46284420168144, 0.25, 0.07942844548137806
K=0.1
plt.plot( np.arange(0.05,0.14,0.003), [vol(β, alpha_0, ρ, ν, t_ex, F, K) for F in np.arange(0.05,0.14,0.003)], label = 'SABR Vols')
font = {'family':'serif','color':'blue','size':20}
plt.xlabel("Forward",fontdict = font)
plt.ylabel("IVOL",fontdict = font)
plt.title("SABR Model",fontdict = font)
plt.vlines(F_0, 0, vol(β, alpha_0, ρ, ν, t_ex, F_0, K), colors ="r", label = 'Current Forward')
plt.legend(loc = 'lower right')
plt.show()
```
[](https://i.stack.imgur.com/Nnwvw.png)
The red line corresponds to the IV at the current forward. Despite being a different implementation, the resulting IV is almost identical to your implementation and also results in a similar negative delta when using an FD method with adjusted IV.
After all, the entire reason for Bartlett (2006) providing a refined delta under the SABR model is to account for the effect of vol. If you move along the vol surface when the underlying changes, you will have different market values, not only due to changes in the underlying, but also due to changes in IV.
However, it was shown that for a portfolio that is both delta and vega hedged, the original SABR Greeks given by Hagan et al. (2002) provide essentially the same result as Bartlett's new SABR Greeks. See for example [Hedging under SABR Model](https://www.researchgate.net/publication/273718080_Hedging_under_SABR_model) by Bruce Bartlett in Wilmott, Feb 2006. This claim is also easy to verify if we add vega to the Black formula:
```
def B(F, K, t, sigma, cp_flag):
    d1 = (np.log(F/K) + 0.5 * sigma**2 * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    opt = cp_flag*(F*norm.cdf(cp_flag*d1) - K*norm.cdf(cp_flag*d2))
    delta = cp_flag*norm.cdf(cp_flag*d1)
    vega = F*norm.pdf(d1)*np.sqrt(t)
    return opt, delta, vega
```
If you compute the new option value for an $F_{pos}$ value of the underlying, you get (using the new vol according to the vol surface):
```
res = B(F_pos,0.10,0.25, vol_pos , 1)
print(f'Call New = {res[0]}')
```
Call New = 0.0015010173779386893
Using a Taylor approximation with Black delta would yield:
```
print(f'Value according to Black Delta = {centred[0] + centred[1]*(F_pos-F_0)}')
```
Value according to Black Delta = 0.001518677457820953
However, this is too high. Using Bartlett's delta (which is in this case negative) gives
```
print(f"Value according to Bartlett's Delta = {centred[0] + (up[0]-down[0])/(2.0*small_figure)*(F_pos-F_0)}")
```
Value according to Bartlett's Delta = 0.0015010140690834006
which is almost identical to the value obtained by using Black delta and Vega together (and very close to the actual value of Black had you computed it outright without Greeks).
```
print(f'Value according to Black Delta and Vega = {centred[0] + centred[1]*(F_pos-F_0) + centred[2]*(vol_pos-vol_0)}')
```
Value according to Black Delta and Vega = 0.0015010228771048103
| null |
CC BY-SA 4.0
| null |
2023-04-09T23:16:32.363
|
2023-04-10T13:11:17.407
|
2023-04-10T13:11:17.407
|
54838
|
54838
| null |
75172
|
1
| null | null |
0
|
42
|
Given a window, expected return divided by standard deviation is the Sharpe ratio.
But I want to form another figure for MDD-adjusted return.
MDD divided by expected return could be suggested, but it seems inappropriate.
How can I construct such a formula?
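For reference, a minimal sketch (with made-up sample returns) of computing the maximum drawdown over a window, which any such figure would need as an input; one commonly quoted variant divides annualised return by MDD (the Calmar-style ratio):
```
import numpy as np

def max_drawdown(returns):
    # maximum peak-to-trough drawdown of the cumulative return path, as a positive number
    wealth = np.cumprod(1.0 + np.asarray(returns))
    running_peak = np.maximum.accumulate(wealth)
    drawdowns = wealth / running_peak - 1.0
    return -drawdowns.min()

# made-up daily returns, for illustration only
rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.01, 252)

mdd = max_drawdown(r)
ann_return = np.mean(r) * 252
print(f"MDD = {mdd:.2%}, annualised return = {ann_return:.2%}, return/MDD = {ann_return / mdd:.2f}")
```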
|
Alternatives to the Sharpe ratio with respect to maximum drawdown (MDD)
|
CC BY-SA 4.0
| null |
2023-04-10T08:08:53.340
|
2023-04-10T08:08:53.340
| null | null |
36599
|
[
"sharpe-ratio",
"maximum-drawdown"
] |
75174
|
1
|
75178
| null |
1
|
85
|
I am currently doing a thesis on a class of SDE parameter inference methods and using the SABR model as an example for inference. I want to extend the application to market data. My question is: does it make sense to use a forward curve (e.g. 3M LIBOR) as the input to my method, infer the SABR parameters (excluding beta) this way, and then price derivatives? I only ask because usually derivative prices are the input to the parameter fitting, not the forward itself.
Thank you in advance.
|
Can you use a forward rate curve to infer the SABR model parameters?
|
CC BY-SA 4.0
| null |
2023-04-10T15:40:28.023
|
2023-04-11T07:31:57.170
| null | null |
66977
|
[
"interest-rates",
"stochastic-volatility",
"sabr"
] |
75176
|
1
| null | null |
1
|
119
|
I'm trying to learn how to simulate a GARCH(1,1) model for option pricing using Monte Carlo. I need to learn how to code the equations for the stock log returns and the variance process. I'm trying to reproduce the simple example given in [Duan (2000)](http://www.math.ntu.edu.tw/%7Ehchen/jointseminar/garchopt.pdf), where all the information is given, even the randomness, for reproduction purposes. However, I can't really get the exact answer as in the given example. I think that I'm not coding the equations correctly. Below is my code
```
S0 <- 51 #current stock price
r <- 0.05 # interest rate
sd1 <- 0.2 # initial standard deviation
K <- 50 # strike price
T2M <- 2 # 2 days to maturity
sim <- 10 # number of simulations
beta0 <- 1e-05
beta1 <- 0.8
beta2 <- 0.1
theta <- 0.5
lambda <- 0.3
# given randomness term for reproducible simulation
z1 <- c(-0.8131,-0.547,0.4109,0.437,0.5413,-1.0472,0.3697,-2.0435,-0.2428,0.3091)
z2 <- c(0.7647, 0.5537, 0.0835, -0.6313, -0.1772, 2.4048, 0.0706, -1.4961,-1.376,0.3845)
# error term
Z <- unname(cbind(z1,z2))
# error under risk neutral probability Q
ZQ <- Z + lambda
# one-step variance need previous step as input (GARCH(1,1)) under Q
variance <- function(beta0, beta1, beta2, theta, lambda, z, Sigma){
return(beta0 + beta1 * Sigma^2 + beta2*Sigma^2*(z -theta -lambda)^2)
}
# one-step stock process
S_T <- function(S0, r, Sigma, T2M, z){
# It returns an array of terminal stock prices.
return(S0*exp((r + lambda*Sigma - Sigma^2/2) + Sigma*sqrt(T2M)*z))
}
# payoff function for European call
payoff_call <- function(S_T, K, r, T2M){
# It returns an array which has the value of the call for each terminal stock price.
return(exp(-r*T2M)*pmax(S_T-K, 0) )
}
# calculate one-step stock price
S1 <- S_T(S0, r, sd1, T2M=T2M/2, z=ZQ[,1])
S1
# calculate standard deviation at step 2
sd2 <- sqrt(variance(beta0, beta1, beta2, theta, lambda, ZQ[,1], sd1))
sd2
```
My output for the standard deviation is
```
0.1972484 0.1907743 0.1790021 0.1789577 0.1789325 0.2039248 0.1791031 0.2405984 0.1849784 0.1793203
```
and stock at time 1 is
```
50.36042 53.11321 64.32868 64.66536 66.02844 48.05690 63.80079 39.37478 56.44494 63.03220
```
whereas in the example the author has a different output
[](https://i.stack.imgur.com/6z9DX.jpg)
I would appreciate the help.
|
GARCH process simulation in R
|
CC BY-SA 4.0
| null |
2023-04-10T20:47:33.523
|
2023-05-20T20:03:20.380
|
2023-04-13T08:46:10.443
|
19645
|
61212
|
[
"option-pricing",
"garch",
"simulations"
] |
75178
|
2
| null |
75174
|
1
| null |
The SABR model is really Black-Scholes in the volatility dimension. This means that volatility is not constant but a stochastic process itself. Hence, $\sigma$ itself is governed by an SDE, just like the forward rate (as assumed in Black-Scholes). That will be the problem with your approach, in my opinion: if you look only at the forward rate, you miss out on all the information in the volatility component itself.
Option implied vols frequently exhibit very pronounced smiles (IVOL for far OTM options is significantly larger compared to ATM), meaning they imply different vol processes. Quoting from [Just What You Need To Know About Variance Swaps - JP Morgan Equity Derivatives](https://www.sk3w.co/documents/volatility_trading.pdf)
>
For each strike and maturity there is a different implied volatility
which can be interpreted as the market’s expectation of future
volatility between today and the maturity date in the scenario implied
by the strike. For instance, out-of-the money puts are natural hedges
against a market dislocation (such as caused by the 9/11 attacks on
the World Trade Center) which entail a spike in volatility; the
implied volatility of out-of-the money puts is thus higher than
in-the-money puts.
Implied vol is also not directly related to historical vol (the vol of the underlying) for at least two reasons:
1 ) Empirically, IV tends to overestimate RV, commonly referred to as [Volatility Risk Premium](https://towardsdatascience.com/what-is-the-volatility-risk-premium-634dba2160ad)
2 ) IV is the only free parameter in the Black-Scholes-Merton (BSM) model. Higher IV can be a result of compensation for tail risk.
For example, if you look at GBPUSD, you have Brexit as a major event. Uncertainty meant that IVOL not only anticipated the higher [realized / historical vol](https://quant.stackexchange.com/a/71790/54838), but also meant that it was heavily skewed towards OTM Puts (on GBP). Below is a screenshot of the smile on the day of Brexit and during normal times.
[](https://i.stack.imgur.com/OBSZx.png)
Given β, the SABR model parameters describe the shape of the surface and the SABR price correction is much stronger away from the money, resulting in a volatility smile:
Once you have $\beta$,
- $\alpha$ mainly controls the overall height (like CEV),
- $\rho$ (correlation) controls the skew (for set beta) and
- $\nu$ (vol of vol) controls the smile
The resulting vol surface looks like this (taken from [this answer](https://quant.stackexchange.com/a/63750/54838)):
[](https://i.stack.imgur.com/WveEE.gif)
Nonetheless, it is an interesting question and research idea. You can get market vols from Bloomberg for example (many universities and libraries like the New York Public Library have Windows PCs running Bloomberg Terminal software). You can also look at [this answer](https://quant.stackexchange.com/q/73449/54838) for a working quantlib code that you can use to fit the SABR model to market quotes. [Here](https://quant.stackexchange.com/a/68195/54838) is a PySABR code.
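As a rough illustration of the usual calibration direction (derivative quotes in, parameters out), here is a sketch that fits $\alpha$, $\rho$ and $\nu$ for a fixed $\beta$ to a handful of invented market vols with SciPy's least squares; the Hagan-type vol function, the quotes and the starting values are all assumptions made purely for the example:
```
import numpy as np
from scipy.optimize import least_squares

def hagan_lognormal_vol(K, f, t, alpha, beta, rho, nu):
    # Hagan et al. (2002) lognormal SABR approximation (z/x(z) set to 1 near ATM)
    logfk = np.log(f / K)
    fkbeta = (f * K) ** ((1 - beta) / 2)
    a = 1 + ((1 - beta)**2 / 24 * alpha**2 / fkbeta**2
             + rho * beta * nu * alpha / (4 * fkbeta)
             + (2 - 3 * rho**2) / 24 * nu**2) * t
    b = 1 + (1 - beta)**2 / 24 * logfk**2 + (1 - beta)**4 / 1920 * logfk**4
    z = nu / alpha * fkbeta * logfk
    x = np.log((np.sqrt(1 - 2 * rho * z + z**2) + z - rho) / (1 - rho))
    zx = np.where(np.abs(z) < 1e-12, 1.0, z / np.where(np.abs(x) < 1e-12, 1.0, x))
    return alpha / fkbeta / b * zx * a

# invented quotes: forward, expiry, strikes and implied vols
f, t, beta = 0.03, 1.0, 0.5
strikes = np.array([0.02, 0.025, 0.03, 0.035, 0.04])
market_vols = np.array([0.45, 0.41, 0.40, 0.41, 0.43])

def residuals(p):
    alpha, rho, nu = p
    return hagan_lognormal_vol(strikes, f, t, alpha, beta, rho, nu) - market_vols

fit = least_squares(residuals, x0=[0.07, -0.3, 0.9],
                    bounds=([1e-6, -0.999, 1e-6], [10.0, 0.999, 10.0]))
print(dict(zip(['alpha', 'rho', 'nu'], fit.x.round(4))))
```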
| null |
CC BY-SA 4.0
| null |
2023-04-11T07:31:57.170
|
2023-04-11T07:31:57.170
| null | null |
54838
| null |
75179
|
1
| null | null |
1
|
44
|
Under Black-Scholes, there exists a solution for the option price for a volatility. The volatility can then be backed out from the option price using numeric methods.
For the constant-elasticity of volatility model (CEV), there isn't a closed form solution for the price as a function of vol; it has to be calculated using the chi-squared distribution. If I want to get the "CEV-implied volatility", it would be very slow for me to use this numeric method as part of another numeric method. Is there any work on this?
|
Is there any way to compute the CEV-implied volatility from option prices?
|
CC BY-SA 4.0
| null |
2023-04-11T07:50:36.530
|
2023-04-11T07:50:36.530
| null | null |
66985
|
[
"volatility",
"black-scholes",
"implied-volatility"
] |
75181
|
1
| null | null |
1
|
58
|
I'm trying to understand the Least-Squares Monte Carlo approach for pricing American options. I'm familiar with the Tsitsiklis and van Roy (2001) approach where we are going backwards with:
- $V_T = h(S_T)$, where $h$ is the payoff from the option if exercised
- for each next step (previous time step on the grid) we have $V_{t_{i-1}}=\max\left\{(K-S_{t_{i-1}})^+, E[DF \times V_{t_i}|\mathcal{F}_{t_{i-1}}]\right\}$ where $DF$ is the discount factor from $t_{i}$ to $t_{i-1}$ and this expected value is called the Continuation Value.
Now, after reading the original paper on the Longstaff-Schwartz algorithm, my thoughts are that it works as follows:
- when we are going backwards, we don't use $V_{t_i}$ as $Y$ in our regression, but we use the sum of discounted cash flows from a given path which occur after time $t_{i-1}$. In the case of an American option, we simply take the cash flow from the date when we exercise it (or 0 if there is no exercise on that path). Here at each step we need to adjust the cash flows so that if we exercise at time $t_{i-1}$, the cash flows after that time are set to 0.
I believe that up to that point I'm correct. But now I've read Glasserman's book about MC simulations in finance and I found the following there:
[](https://i.stack.imgur.com/zGvjr.png)
From the above, it seems that the only difference between LS and TvR is that in LS, at each step we do not set $C$ (continuation value, conditional expectation) as the value of a trade if it exceeds the exercise value, but we set the discounted value from the previous step. There is nothing mentioned about summing cash flows from the future. Is it correct? How is it equivalent to the approach described above where we regress the sum of discounted cash flows instead of the discounted value from the previous step?
So in short, what is correct in the LS algorithm:
- use the sum of discounted realized future cash flows as $Y$ for the Continuation Value estimation and set this $C$ as the option value if it exceeds the exercise value
or
- use the value from the previous step as $Y$ for the $C$ estimation but don't set this continuation value as the option value even if it exceeds the exercise value - in that case use the discounted option value from the previous step.
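For concreteness, here is a minimal sketch (my own, not taken from either paper) of the first variant - regressing discounted realized cash flows on ITM paths only - for an American put under GBM with a quadratic polynomial basis:
```
import numpy as np

def lsmc_american_put(S0, K, r, sigma, T, n_steps, n_paths, seed=0):
    # Longstaff-Schwartz: Y = discounted realized cash flow, regression on ITM paths only
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
    S = np.hstack([np.full((n_paths, 1), S0), S])
    cashflow = np.maximum(K - S[:, -1], 0.0)     # cash flow at maturity
    tau = np.full(n_paths, n_steps)              # step at which each path currently exercises
    for t in range(n_steps - 1, 0, -1):
        itm = (K - S[:, t]) > 0
        if not itm.any():
            continue
        # Y: realized cash flow from the currently scheduled exercise, discounted back to t
        Y = cashflow[itm] * np.exp(-r * dt * (tau[itm] - t))
        X = S[itm, t]
        coeff = np.polyfit(X, Y, 2)              # continuation value regression
        continuation = np.polyval(coeff, X)
        exercise = K - X
        ex_now = exercise > continuation
        idx = np.where(itm)[0][ex_now]
        cashflow[idx] = exercise[ex_now]         # exercising kills all later cash flows on the path
        tau[idx] = t
    return np.mean(cashflow * np.exp(-r * dt * tau))

# parameters from the classic example in Longstaff-Schwartz (2001); their American put value is about 4.47
print(lsmc_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0, n_steps=50, n_paths=100_000))
```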
Additional questions:
- LS recommend to use only ITM paths for the regression. But what value should be set for OTM paths? Should we simply set the discounted value from the previous step, or should we set the continuation value from the regression? I.e., OTM paths shouldn't be used to estimate the regression coefficients, but those coefficients should be used for these OTM paths to determine the Continuation Value?
References
F. A. Longstaff and E. S. Schwartz, “Valuing American options by simulation: A simple least-squares approach,” Rev. Financial Studies, vol. 14, no. 1, pp. 113–147, 2001 ([link](https://people.math.ethz.ch/%7Ehjfurrer/teaching/LongstaffSchwartzAmericanOptionsLeastSquareMonteCarlo.pdf))
J. N. Tsitsiklis and B. Van Roy, "Regression Methods for Pricing Complex American-Style Options", IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 4, JULY 2001 ([link](https://www.mit.edu/%7Ejnt/Papers/J086-01-bvr-options.pdf))
P. Glasserman: Monte Carlo Methods in Financial Engineering, Springer, 2003
|
Longstaff-Schwartz LS Monte Carlo - which approach is correct?
|
CC BY-SA 4.0
| null |
2023-04-11T10:03:44.697
|
2023-04-11T10:47:55.483
|
2023-04-11T10:47:55.483
|
16148
|
66988
|
[
"regression",
"american-options",
"simulations",
"longstaff-schwartz"
] |
75182
|
1
| null | null |
11
|
169
|
I was wondering if there is any industry standard in pricing very short dated options, from say 6h options down to 5 minute options.
My thinking is that as time to expiry gets shorter and shorter, the stock price should resemble more and more a GBM, so a naive B&S should do the job.
With the rise of 0DTE options I'm curious if any practitioner has insights behind pricing those. Also what about binary options exchanges, with like 5-10 minute options. Are those even priced as options?
|
How to price very short dated options?
|
CC BY-SA 4.0
| null |
2023-04-11T13:16:43.550
|
2023-04-14T07:08:35.090
| null | null |
46758
|
[
"options",
"option-pricing",
"pricing"
] |
75183
|
1
| null | null |
1
|
90
|
I am working on finding the expected return of bonds. To find the expected return for corporate bonds I have been using a transition matrix showing the probability of default, together with recovery rates. But currently I am stuck on how to find the expected annual return of government bonds. Would it just be the YTM? Can anyone help?
Thanks
|
Bond Annual Expected Return
|
CC BY-SA 4.0
| null |
2023-04-11T15:54:51.863
|
2023-04-12T12:46:49.073
|
2023-04-11T15:56:31.667
|
66991
|
66991
|
[
"fixed-income"
] |
75184
|
1
|
75191
| null |
2
|
69
|
I am studying this time dependent Heston model
\begin{equation}
dS_t=(r-q) dt +\sqrt{V_t} dW_t^1 \\
dV_t=\kappa_t(\theta_t-V_t) dt + \sigma_t dW_t^2 \\
S_0=s_0\\
V_0=v_0\\
\rho_t=<dW_t^1,dW_t^2>
\end{equation}
I wrote a program using the Elice method and tried to compare my results with the Shamim Afshani paper [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1615153](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1615153). Using the Elice method I have a good match with the paper; however, my Monte Carlo routine does not seem to give the results that I want. Below you find the example with python code.
For the initial values $S0 = 1$, $V0 = 0.1$ and the time points $t_0 = 0$, $t_1 = 1$, $t_2 = 3$ and $t_3 = 5$, we specify
$r_t = 0$, $q_t = 0$, $\theta_t = 0.1$, $\sigma_t = 1$, $\rho_t = -0.9$ and $\kappa_t = \sum_{m=1}^{3} \kappa_m I_{[t_{m-1}<t\leq t_m]}$ where $\kappa_1 = 4$, $\kappa_2 = 2$ and
$\kappa_3 = 1$
|Strike |Afshani price |Elice price |Monte carlo Milstein price |Monte carlo Broadie price |
|------|-------------|-----------|--------------------------|-------------------------|
|0.5 |0.548724 |0.548733 |0.551647 |0.547670 |
|0.75 |0.370421 |0.370423 |0.376329 |0.3697154 |
|1 |0.230355 |0.230357 |0.23865 |0.229919 |
|1.25 |0.129324 |0.129328 |0.138600 |0.12882076 |
|1.5 |0.063974 |0.063981 |0.072626 |0.063716 |
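For illustration, a minimal full-truncation Euler sketch of the piecewise-constant dynamics with the parameters above (assuming the standard lognormal Heston form for the asset; this is only an illustration, not the exact routine used for the table):
```
import numpy as np

def heston_pw_const_mc(S0, V0, T, strikes, n_steps=1000, n_paths=100_000, seed=1):
    # full-truncation Euler for Heston with piecewise-constant kappa; other parameters flat
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r, q, theta, sigma, rho = 0.0, 0.0, 0.1, 1.0, -0.9
    kappa_pieces = [(1.0, 4.0), (3.0, 2.0), (5.0, 1.0)]   # (end of interval, kappa)

    def kappa(t):
        for t_end, k in kappa_pieces:
            if t <= t_end:
                return k
        return kappa_pieces[-1][1]

    logS = np.full(n_paths, np.log(S0))
    V = np.full(n_paths, V0)
    for i in range(n_steps):
        t = (i + 1) * dt
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        Vp = np.maximum(V, 0.0)                            # full truncation of the variance
        logS += (r - q - 0.5 * Vp) * dt + np.sqrt(Vp * dt) * z1
        V += kappa(t) * (theta - Vp) * dt + sigma * np.sqrt(Vp * dt) * z2
    S_T = np.exp(logS)
    return [np.exp(-r * T) * np.mean(np.maximum(S_T - K, 0.0)) for K in strikes]

print(heston_pw_const_mc(1.0, 0.1, 5.0, [0.5, 0.75, 1.0, 1.25, 1.5]))
```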
Is there any research paper that studies Monte Carlo simulation of the time-dependent Heston model when the Feller condition does not hold?
|
Piecewise constant Heston model Monte Carlo simulation
|
CC BY-SA 4.0
| null |
2023-04-11T19:01:09.750
|
2023-04-12T19:59:18.240
|
2023-04-12T19:58:09.463
|
65967
|
65967
|
[
"monte-carlo",
"heston",
"time"
] |
75185
|
1
| null | null |
2
|
103
|
I am confused about how the basis converges in a couple of scenarios. Let's assume I am long the UST CTD basis.
- Say the curve is upward sloping:
optimally, I would choose to make delivery of the bond at the last notice date (say the last trade date is 1 week before the last notice date).
I would choose not to deliver at the last trading date as I can benefit from +ve carry for the next 1 week (and don't want to squander my end-of-month option). Aside from the fact that the futures price ~ (Spot - 1w carry - EOM option PV)/CF at the last trading date, can I say anything else about the basis (net of carry)? At this point it should probably just be positive by a few ticks?
- Say the yield curve is downward sloping:
There is a trade off between the carry and the end of month (EOM) optionality. So there are 2 scenarios, say:
a) -ve carry outweighs the optionality lost.
My optimal strategy here would be to save the -ve carry and deliver as soon as possible. Say this date is the last trading date of the contract. In this scenario, as the negative 1w carry more than offsets the EOM option PV, the converted futures price > spot price (therefore, the basis is -ve) at the last trading date. But how can the basis be -ve? I could arbitrage this, if I had no position, by buying the CTD, selling a future and immediately delivering into it (this itself would push the basis back to 0).
b) -ve carry doesn't outweigh the optionality lost.
This would revert to something similar to point 1.
|
Futures basis (Bond) optimal delivery
|
CC BY-SA 4.0
| null |
2023-04-12T05:26:02.847
|
2023-05-31T18:15:51.263
|
2023-05-31T18:15:51.263
|
848
|
65739
|
[
"fixed-income",
"bond",
"arbitrage",
"basis"
] |
75186
|
2
| null |
75183
|
1
| null |
Forecasting expected returns on sovereign bonds over a time period is equivalent to forecasting expected coupon payments (straightforward under a no-default assumption) and forecasting expected price changes (very challenging). So, while the initial yield does offer some predictive value, the total return over a time period will be driven by realized yield changes.
A significant amount of research has gone into finding the best set of variables for predicting US bond returns. "Forecasting U.S. Bond Returns" by Antti Ilmanen (Journal of Fixed Income, 1997) offers an accessible introduction to this topic along with a set of variables (Term Spread, Real Yield, Inverse Wealth, Momentum) that show some predictive ability. A more recent study, "Predicting Bond Returns: 70 Years of International Evidence" by Baltussen et al (Financial Analysts Journal, 2021), revisits this study and others using a more comprehensive set of data and finds statistically significant bond return predictability. These two papers should give you an excellent entry point into this area.
| null |
CC BY-SA 4.0
| null |
2023-04-12T12:46:49.073
|
2023-04-12T12:46:49.073
| null | null |
50071
| null |
75188
|
2
| null |
70421
|
0
| null |
Going directly to the question while reading [the Grossman-Miller paper in question](https://pages.stern.nyu.edu/%7Elpederse/courses/LAP/papers/InventoryRisk/GrossmanMiller.pdf), and specifically equations 2a - 2c: G-M uses W for total wealth, B for holdings in cash, x for holdings in the security, and P for their unit price. In the question, X is used for cash holdings, q for security holdings and S for the security price.
As the question lacks a specific notation for the total wealth, it instead reuses X, with the right sub- and superscripts, to denote the cash value of the total wealth.
Considering this should dissolve the confusion. For a walkthrough, see Peter Fazekas's answer!
| null |
CC BY-SA 4.0
| null |
2023-04-12T15:00:51.783
|
2023-04-12T15:20:03.920
|
2023-04-12T15:20:03.920
|
23886
|
23886
| null |
75189
|
1
| null | null |
0
|
35
|
The Boost C++ Libraries provide a set of statistical distributions in their Math Toolkit library. The best candidate I can find among those available that will capture skew and kurtosis typically present in financial returns is the [non-central t-distribution](https://www.boost.org/doc/libs/1_81_0/libs/math/doc/html/math_toolkit/dist_ref/dists/nc_t_dist.html). The input parameters are the degrees of freedom and a skewness parameter.
If I wanted to use this to model financial returns, however, I would also need to adjust for location and scale. This can be done fairly easily for the regular (symmetric) t-distribution. Supposing I had a random sample from a t-distribution, say $\{x_i\}$, and I had estimates for the degrees of freedom, location (mean in this case), and scale values, $m$ and $s$. Then, I could get a set of simulated returns $\{y_i\}$ by applying the location-scale transformation
$$y_i = s x_i + m$$
My (perhaps naïve) questions are:
- Would the above transformation for the symmetric case be reasonably close to the non-central case if the skew is "reasonably small"?
- Alternatively, could the same linear form be used, but perhaps with a more robust estimate for $s$ in the case of the non-central t-distribution (presumably using the mean for $m$ would still be appropriate, given the shifted standard normal random variable in the pdf of the non-central t-distribution)? If so, what would this preferred estimate be? Could it be derived from the data similarly to $s$ in the case of the symmetric t-distribution, i.e. similar to
$$s = \sigma\sqrt{\frac{\nu - 2}{\nu}}$$ where $\nu$ is the degrees of freedom and $\sigma$ is the standard deviation, in the symmetric case?
- Or, finally, would we need to scrap the linear form above and use an alternative? If so, what might it be?
I have checked a number of research papers and other resources on the internet, but while there is ample information on the distribution itself and other applications, answers to the above have proven elusive so far.
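For what it's worth, here is a minimal sketch of the linear location-scale transform in question, using SciPy's non-central t purely to generate the sample (the Boost usage would be analogous; all parameter values are made up):
```
import numpy as np
from scipy.stats import nct

nu, delta = 5.0, -0.3          # degrees of freedom and non-centrality (made-up values)
m, s = 0.0004, 0.012           # target location and scale for daily returns (made-up values)

x = nct.rvs(nu, delta, size=100_000, random_state=42)   # raw non-central t sample
y = s * x + m                                            # the linear location-scale transform

# exact mean/variance of the raw non-central t, to see how far it sits from the symmetric case
raw_mean, raw_var = nct.stats(nu, delta, moments='mv')
print(f"raw mean = {float(raw_mean):.4f}, raw std = {float(np.sqrt(raw_var)):.4f}")
print(f"transformed sample: mean = {y.mean():.6f}, std = {y.std():.6f}")
```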
|
Estimating Returns with the Non-Central t-distribution
|
CC BY-SA 4.0
| null |
2023-04-12T19:40:35.800
|
2023-04-12T19:40:35.800
| null | null |
66999
|
[
"programming",
"returns",
"simulations",
"asset-returns"
] |
75190
|
2
| null |
75176
|
0
| null |
To be honest, I don't quite understand the steps in the linked PDF. The std. normals don't look like standard normals at all; the absolute values are too high.
I did manage to match your stock prices with only two minor changes:
```
# error under risk neutral probability Q
ZQ <- Z - lambda # was plus
```
The interest rate and volatility are annualized, but the simulations are based on days. Often annualization is done based on 250 or 252 days; in this case 365 days are used. The code becomes
```
days_per_year <- 365
S1 <- S_T(S0, r / days_per_year, sd1 / sqrt(days_per_year), T2M=T2M/2, z=ZQ[,1])
```
This last issue is without a doubt a bug in the original code. 5% daily interest doesn't make sense.
| null |
CC BY-SA 4.0
| null |
2023-04-12T19:40:40.690
|
2023-04-20T19:42:32.950
|
2023-04-20T19:42:32.950
|
848
|
848
| null |
75191
|
2
| null |
75184
|
0
| null |
Indeed, the Broadie and Kaya method works fine in comparison with the Milstein method; I have updated the table.
| null |
CC BY-SA 4.0
| null |
2023-04-12T19:59:18.240
|
2023-04-12T19:59:18.240
| null | null |
65967
| null |
75193
|
1
| null | null |
2
|
65
|
In the paper of [Duan (1995)](https://onlinelibrary.wiley.com/doi/10.1111/j.1467-9965.1995.tb00099.x#:%7E:text=the%20GARCH%20option%20pricing%20model%20is%20capable%20of%20reflecting%20the,with%20the%20Black%2DScholes%20model.) the author compares European call option prices using the Black-Scholes model vs. a GARCH(1,1)-M model (GARCH-in-mean). To be brief, the author fits the following GARCH(1,1)-M model
$$\ln(X_t/X_{t-1}) = r + \lambda \sqrt{h_t} - \frac{1}{2} h_t + \varepsilon_t $$
$$ \varepsilon_t |\phi_{t-1} \sim N(0,h_t) \quad \mbox{under measure P} $$
$$ h_t = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 h_{t-1} \tag{1}$$
to S&P 100 daily index series from `1986-01-02` to `1989-12-15`, where
$r$ is the risk-free interest rate, $\lambda$ is the constant unit risk premium and $\phi_t$ is the information set. For GARCH(1,1), $X_t$ and $h_t$ together serve as the sufficient statistics for $\phi_t$.
The author also provides the risk-neutral dynamics of (1) as follows
$$\ln(X_t/X_{t-1}) = r - \frac{1}{2} h_t + \xi_t $$
$$ \xi_t |\phi_{t-1} \sim N(0,h_t) \quad \mbox{under measure Q} $$
$$ h_t = \alpha_0 + \alpha_1 (\xi_{t-1}-\lambda \sqrt{h_{t-1}})^2 + \beta_1 h_{t-1} \tag{2}$$
They also state that the price of a European call option with exercise price $K$ is given by
$$C^{GH}_0 = e^{-rT} \mathbb{E}^Q \bigg[ \max(X_T - K, 0) \bigg]$$
where the terminal asset price is derived as
$$X_T = X_0 \exp \bigg[rT -\frac{1}{2} \sum_{s=1}^T h_s + \sum_{s=1}^T \xi_s \bigg] \tag{3} $$
For the computation part, the author provides the estimated GARCH parameters and some other assumptions used in their calculations. Their assumptions and results are in the following R code
```
#----------------------------------------------------
# information and results given by the author
#----------------------------------------------------
# estimated parameters of the GARCH(1,1)-M model
alpha0 <- 1.524e-05
alpha1 <- 0.1883
beta1 <- 0.7162
lambda <- 7.452e-03
r <- 0 # interest rate
K <- 1 # strike price is set at $1 (stated like this in the paper)
T2M <- c(30, 90, 180) # days to maturity
S_over_K <-c(.8, .9, .95, 1, 1.05, 1.1, 1.2) # moneyness ratios considered (stated like this in the paper)
Sigma <- 0.2413 / sqrt(365)
simulations <- 5*10^4 # Monte Carlo simulations
maturity <- c(rep(T2M[1], length(S_over_K)), rep(T2M[2], length(S_over_K)), rep(T2M[3], length(S_over_K)))
SK_ratio <- rep(S_over_K, length(T2M))
# Black-Scholes option prices
BS_price <- c(.1027, 18.238,89.79, 276.11, 600.41, 1027.9, 2001, 13.016, 118.54, 257.79, 478, 779.68, 1152.2, 2036.6, 66.446, 261.3, 438.17, 675.54, 970.49, 1318, 2133.4)
# GARCH prices when sqrt(h_1) = Sigma
GH_price <- c(.9495, 20.93, 86.028, 266.75, 596.13, 1030.2, 2003.5, 15.759, 116.06, 251.02, 468.9, 772.35, 1149.4, 2040.5, 68.357, 257.09, 431.7, 668.5, 964.29, 1313.9, 2134.7)
df <- data.frame(maturity, SK_ratio, BS_price, GH_price)
df
```
I'm trying to get their results by using the following code.
```
#-------------------------------------------------------
# my attempt
#-------------------------------------------------------
library(rugarch)
library(stats)
library(quantmod)
library(PerformanceAnalytics)
library(fOptions)
# download S&P100 daily index series
fromDate <- "1986-01-02"
toDate <- "1989-12-15"
quantmod::getSymbols("^OEX", from = fromDate, to = toDate)
# calculate the log returns
sp100ret <- na.omit(PerformanceAnalytics::CalculateReturns(OEX$OEX.Close, method = "log"))
# set GARCH(1,1)-M model specification
garchspec <- ugarchspec(mean.model = list(armaOrder = c(0, 0), include.mean=TRUE, archm=TRUE, archpow=1),
variance.model = list(model = "sGARCH", garchOrder=c(1,1)),
distribution.model = "norm")
# fit the model to the daily returns data
sp100Fit <- ugarchfit(data = sp100ret, spec = garchspec)
# output the GARCH estimated parameters
garchcoef <- coef(sp100Fit)
garchcoef
# assume the initial conditional variance = stationary level i.e, sqrt(h_1) = Sigma
h_1 <- alpha0/(1-alpha1 - beta1)
S0 <- S_over_K * K
my_BS_price <- c()
# calculate Black-Scholes prices
for (i in 1:3){
my_BS_price <- c(my_BS_price, fOptions::GBSOption(TypeFlag="c", S=S0, X=K, Time=T2M[i], r=r, sigma=h_1, b=r)@price)
}
# create empty matricies for the standard normal random variables under Q
epsilonQ_T30 <- matrix(stats::rnorm((T2M[1]+1)*simulations, 0, 1), nrow = T2M[1]+1, ncol= simulations)
epsilonQ_T90 <- matrix(stats::rnorm((T2M[2]+1)*simulations, 0, 1), nrow = T2M[2]+1, ncol= simulations)
epsilonQ_T180 <- matrix(stats::rnorm((T2M[3]+1)*simulations, 0, 1), nrow = T2M[3]+1, ncol= simulations)
epsilonQ_T30[1,] <- 0
epsilonQ_T90[1,] <- 0
epsilonQ_T180[1,] <- 0
h_T30 <- matrix(nrow = T2M[1]+1, ncol = simulations)
h_T90 <- matrix(nrow = T2M[2]+1, ncol = simulations)
h_T180 <- matrix(nrow = T2M[3]+1, ncol = simulations)
h_T30[1,] <- h_1
h_T90[1,] <- h_1
h_T180[1,] <- h_1
# for loop to calculate the conditional variance process Eq(2) maturity 30
for (t in 2:31){
h_T30[t, ] <- alpha0 + alpha1 * h_T30[t-1,] * (epsilonQ_T30[t-1,] - lambda)^2 + beta1 * h_T30[t-1,]
epsilonQ_T30[t, ] <- epsilonQ_T30[t, ] * sqrt(t) + lambda * t # risk-neutral
}
# for loop to calculate the conditional variance process Eq(2) maturity 90
for (t in 2:91){
h_T90[t, ] <- alpha0 + alpha1 * h_T90[t-1,] * (epsilonQ_T90[t-1,] - lambda)^2 + beta1 * h_T90[t-1,]
epsilonQ_T90[t, ] <- epsilonQ_T90[t, ] * sqrt(t) + lambda * t # risk-neutral
}
# for loop to calculate the conditional variance process Eq(2) maturity 180
for (t in 2:181){
h_T180[t, ] <- alpha0 + alpha1 * h_T180[t-1,] * (epsilonQ_T180[t-1,] - lambda)^2 + beta1 * h_T180[t-1,]
epsilonQ_T180[t, ] <- epsilonQ_T180[t, ] * sqrt(t) + lambda * t # risk-neutral
}
# remove first row
h_T30 <- h_T30[-c(1), ]
h_T90 <- h_T90[-c(1), ]
h_T180 <- h_T180[-c(1), ]
epsilonQ_T30 <- epsilonQ_T30[-c(1),]
epsilonQ_T90 <- epsilonQ_T90[-c(1),]
epsilonQ_T180 <- epsilonQ_T180[-c(1),]
# calculate the exponential part in terminal value of stock Eq(3)
ST_expPart_T30 <- exp(r*T2M[1] - 0.5 * colSums(h_T30) + colSums(sqrt(h_T30)) * colSums(epsilonQ_T30))
ST_expPart_T90 <- exp(r*T2M[2] - 0.5 * colSums(h_T90) + colSums(sqrt(h_T90)) * colSums(epsilonQ_T90))
ST_expPart_T180 <- exp(r*T2M[3] - 0.5 * colSums(h_T180) + colSums(sqrt(h_T180)) * colSums(epsilonQ_T180))
# calculate the terminal value of the stock for each initial value Eq(3)
ST_T30 <- matrix(nrow = length(S0), ncol=simulations)
ST_T90 <- matrix(nrow = length(S0), ncol=simulations)
ST_T180 <- matrix(nrow = length(S0), ncol=simulations)
for(i in 1:length(S0)){
ST_T30[i,] <- S0[i] * ST_expPart_T30
ST_T90[i,] <- S0[i] * ST_expPart_T90
ST_T180[i,] <- S0[i] * ST_expPart_T180
}
# calculate expectation of the discounted call option payoff
C_T30 <- c()
C_T90 <- c()
C_T180 <- c()
for(i in 1:length(S0)){
C_T30[i] <- exp(-r*T2M[1]) * mean( pmax(ST_T30[i,] -K, 0) )
C_T90[i] <- exp(-r*T2M[2]) * mean( pmax(ST_T90[i,] -K, 0) )
C_T180[i] <- exp(-r*T2M[3]) * mean( pmax(ST_T180[i,] -K, 0) )
}
my_GH_price <- c(C_T30, C_T90, C_T180)
my_df <- data.frame(maturity, SK_ratio, my_BS_price, my_GH_price)
my_df
```
where
$$\xi_t = \sqrt{h_t} \varepsilon^Q_t ;\quad \varepsilon^Q_t \sim N(0,t) \quad \mbox{under measure Q or }\quad \varepsilon^Q_t \sim N(\lambda t,t) \quad \mbox{under measure P}$$
I have three problems.
- I didn't get exactly the same estimated parameters as theirs. Is the ugarchspec wrong?
- I used the author's estimated parameters to calculate option prices using the Black-Scholes formula, but I didn't get the same numbers. I don't understand why the BS price is not the same. Any idea?
- I didn't get the same numbers as the paper for the GARCH model prices either. Is there anything wrong with how I modeled the risk-neutral dynamics of (2)?
I appreciate the help.
|
Problem matching prices of Black-Scholes vs. GARCH(1,1) in Duan (1995)
|
CC BY-SA 4.0
| null |
2023-04-12T21:35:08.410
|
2023-04-13T08:51:12.763
|
2023-04-13T08:51:12.763
|
19645
|
61212
|
[
"option-pricing",
"black-scholes",
"monte-carlo",
"risk-neutral-measure",
"garch"
] |
75194
|
1
| null | null |
4
|
96
|
Assuming the usual setup of:
- $\left(\Omega, \mathcal{S}, \mathbb{P}\right)$ our probability space endowed with a filtration $\mathbb{F}=\left(\mathcal{F}_t\right)_{t\in[0,T]}$,
- $T>0$ denoting the option maturity,
- an $\mathbb{F}$-adapted process $Z=\left(Z_t\right)_{t\in[0,T]}$ modeling the discounted value of the option payoff at time $t$;
Why do we define the problem of pricing an American option as:
$$
{\text{ess}\sup}_{\tau\in\mathrm{T}_{[0, T]}} \mathbb{E}\left[Z_{\tau}|\mathcal{F}_0\right]
$$
and not as:
$$
{\text{ess}\sup}_{s\in[0, T]} \mathbb{E}\left[Z_{s}|\mathcal{F}_0\right]?
$$
In the above $\mathrm{T}_A$ is the set of all stopping times (with respect to our filtration $\mathbb{F}$) with values in the set $A$.
Non-mathematical common sense suggests that the option holder is basically only interested in a moment $s$ when to exercise the option optimally, so why should he be interested in optimizing over all stopping times?
My further doubts stem from the fact that every $s\in[0,T]$ is obviously also a stopping time, therefore we have an inclusion of the second formulation in the first and it would appear reasonable to state that:
$$
{\text{ess}\sup}_{s\in[0, T]} \mathbb{E}\left[Z_{s}|\mathcal{F}_0\right] \leq {\text{ess}\sup}_{\tau\in\mathrm{T}_{[0, T]}} \mathbb{E}\left[Z_{\tau}|\mathcal{F}_0\right].
$$
On the other hand, I am aware that finding the optimal stopping time provides the optimal moment of exercise, since our stopping times take values in $[0,T]$.
Therefore I have this uncomfortable feeling, that the first formulation might provide a bigger optimal value (because we are optimizing over a broader family of arguments) than the second, whereas I would imagine that both formulations should amount to the same result.
What sort of fallacies am I committing in following the presented line of thought?
|
American option pricing formulation
|
CC BY-SA 4.0
| null |
2023-04-12T23:09:30.203
|
2023-04-16T07:44:54.447
|
2023-04-16T07:44:54.447
|
67001
|
67001
|
[
"american-options",
"stochastic-control",
"stopping-time"
] |
75195
|
1
|
75375
| null |
1
|
134
|
I'm looking at the level-2 data of some ETFs and trying to figure out whether it's possible to design a market making strategy. When looking at the data, I find these ETFs to be highly liquid: at the top of the book, all price levels are next to each other by just one price tick and with very large volume, leaving me no space to insert my orders; the only choice seems to be waiting in the queue.
I have no experience in market making, and the situation seems to indicate that there's no liquidity premium left to capture in these assets. I'm guessing there are already many designated market makers working on these assets and competing with each other, thus leaving no room for later players.
My question is, what should a market maker do if he/she tries to enter this market, or how do existing market makers gain an advantage over competitors?
|
How to make markets for highly liquid assets?
|
CC BY-SA 4.0
| null |
2023-04-13T06:14:06.287
|
2023-04-28T14:03:54.477
| null | null |
42028
|
[
"market-making"
] |
75196
|
2
| null |
75194
|
1
| null |
In explicitly wording my own question yesterday and naming my doubts, I think I may have stumbled upon the explanation:
- On the one hand, indeed we have
$$
{\text{ess}\sup}_{s\in[0,T]}\mathbb{E}\left[Z_s|\mathcal{F}_0\right] \leq {\text{ess}\sup}_{\tau\in\mathrm{T}_{[0,T]}}\mathbb{E}\left[Z_\tau|\mathcal{F}_0\right].
$$
- On the other, the optimal stopping time $\tau^\star$ obtained from solving
$$
{\text{ess}\sup}_{\tau\in\mathrm{T}_{[0,T]}}\mathbb{E}\left[Z_\tau|\mathcal{F}_0\right]
$$
provides us with an optimal time $t^*$ for the option exercise, as it takes values in $[0,T]$.
Speaking more formally, for any arbitrary stopping time $\tau\in\mathrm{T}_{[0, T]}$ we define the moment of exercise as:
$$
\sup\left\{t\in[0, T]:\{\tau \leq t\} = \emptyset\right\}
$$
i.e. the last moment $t\in[0,T]$ such that the event $\{\tau\leq t\}$ is empty and $\mathbb{P}\left(\{\tau\leq t\}\right) = 0$ holds.
With that in mind, we can write
$$
{\text{ess}\sup}_{\tau\in\mathrm{T}_{[0,T]}}\mathbb{E}\left[Z_\tau|\mathcal{F}_0\right] = \mathbb{E}\left[Z_{\tau^\star}|\mathcal{F}_0\right] = \mathbb{E}\left[Z_{t^\star}|\mathcal{F}_0\right].
$$
It remains to demonstrate that our stopping time derived exercise moment $t^\star$ is indeed equal to the one from the second formulation. To that end, let us assume that $s^\star$ is the optimal exercise time derived from solving the deterministic formulation:
$$
{\text{ess}\sup}_{s\in[0,T]}\mathbb{E}\left[Z_s|\mathcal{F}_0\right] = \mathbb{E}\left[Z_{s^\star}|\mathcal{F}_0\right].
$$
On the one hand we have:
$$
\mathbb{E}\left[Z_{s^\star}|\mathcal{F}_0\right] = {\text{ess}\sup}_{s\in[0,T]}\mathbb{E}\left[Z_s|\mathcal{F}_0\right] \leq {\text{ess}\sup}_{\tau\in\mathrm{T}_{[0,T]}}\mathbb{E}\left[Z_\tau|\mathcal{F}_0\right] = \mathbb{E}\left[Z_{t^\star}|\mathcal{F}_0\right].
$$
On the other, trivially (by definition of $\text{ess}\sup$ and $s^\star$ being the value that realizes it) we have:
$$
\mathbb{E}\left[Z_{t^\star}|\mathcal{F}_0\right] \leq \mathbb{E}\left[Z_{s^\star}|\mathcal{F}_0\right].
$$
These two together gives us:
$$
\mathbb{E}\left[Z_{t^\star}|\mathcal{F}_0\right] = \mathbb{E}\left[Z_{s^\star}|\mathcal{F}_0\right]
$$
which implies $t^\star=s^\star \text{a.s.}$.
Therefore solving the optimal stopping problem (to which optimal stopping theory lends itself nicely as a tool) solves the deterministic formulation too.
I will leave this answer for now to gather feedback and comments.
| null |
CC BY-SA 4.0
| null |
2023-04-13T09:21:56.560
|
2023-04-16T07:44:00.850
|
2023-04-16T07:44:00.850
|
67001
|
67001
| null |
75197
|
1
| null | null |
0
|
91
|
Let's say we have a volatility surface for the SPX at time t with spot S. We consequently know the price of some call option with maturity T and strike K. What is the risk-neutral expectation of the option price at t+1 day, if spot moves 1% (i.e. E[option price | S*1.01])?
|
What is the risk neutral expectation of an option price given a move in spot?
|
CC BY-SA 4.0
| null |
2023-04-13T11:52:37.907
|
2023-05-13T16:04:45.520
|
2023-04-13T14:08:11.573
|
65391
|
65391
|
[
"options",
"option-pricing",
"risk-neutral-measure",
"expected-value"
] |
75198
|
2
| null |
75164
|
2
| null |
Interest rates have an effect on options in two ways:
- the forward: the higher the interest rates, the higher the forward. Thus, with a higher forward, calls become more expensive and puts the opposite.
- The present value of the premium: the price of the option = PV(E[Payoff]). So with higher interest rates, premiums will go down, both for calls and puts, as far as this effect is concerned.
Now how do we link these effects to carry?
The first one will indeed be proportional to delta, as you expect the spot to climb a little bit every day until it reaches the forward. Every day the spot does not go up, the call will depreciate a little bit due to this effect, and the put will appreciate. If it is a deep ITM option it will have a large effect, as the option will be comparable to a forward outright.
The second effect will also depend on whether the option is ITM or OTM, but in another way: if the option is ITM it will have more premium than its counterpart. The carry that this generates is simply E[premium at $t+1$] - premium at $t_0$, which is basically removing 1 day of discounting from the E[payoff].
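A quick numerical sketch of the two effects combined (my own illustration, plain Black-Scholes prices without dividends, bumping the rate):
```
import numpy as np
from scipy.stats import norm

def bs(S, K, r, t, sigma, cp):
    # Black-Scholes price for a European call (cp=1) or put (cp=-1), no dividends
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return cp * (S * norm.cdf(cp * d1) - K * np.exp(-r * t) * norm.cdf(cp * d2))

S, K, t, sigma = 100.0, 100.0, 1.0, 0.2
for r in (0.01, 0.05):
    c, p = bs(S, K, r, t, sigma, 1), bs(S, K, r, t, sigma, -1)
    parity = S - K * np.exp(-r * t)
    print(f"r={r:.0%}: call={c:.3f} put={p:.3f} call-put={c - p:.3f} (= S - K*exp(-rt) = {parity:.3f})")
```
The higher rate pushes the forward up (call up, put down), while call minus put always equals the value of the forward position, consistent with the two effects above.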
| null |
CC BY-SA 4.0
| null |
2023-04-13T12:12:45.427
|
2023-04-14T07:09:51.083
|
2023-04-14T07:09:51.083
|
19645
|
65391
| null |
75199
|
2
| null |
75197
|
1
| null |
Your question is unclear / lacks relevant detail, but I suspect what you're really asking is what happens to the vol change when the spot change is given.
Assume, as an example, the following dynamics:
\begin{align}
dS_t &= \sigma_t S_t dW_t \\
d\sigma_t &= \alpha \sigma_t ( \rho dW_t + \sqrt{1-\rho^2} dZ_t)
\end{align}
with $dW dZ = 0$.
You'd like to calculate $E_t[ C(S_{t+dt},\sigma_{t+dt},K) | dS_t = c]$. Now
$$
C(S_{t+dt},\sigma_{t+dt},K) = C(S_t,\sigma_t, K) + dC(S_t,\sigma_t,K)
$$
with
$$
dC(S_t,\sigma_t,K) = \Delta dS_t + \nu d\sigma_t
$$
with $\Delta$ the delta of the option and $\nu$ the vega.
Given $dS_t = c$ then
\begin{align}
dS_t &= c\\
d\sigma_t &= \frac{\rho \alpha}{S_t} dS_t + (\cdot) dZ_t \\
& = \frac{\rho \alpha}{S_t} c + (\cdot) dZ_t
\end{align}
Thus
\begin{align}
E_t[ C(S_{t+dt},\sigma_{t+dt},K) |dS_t = c] &= C(S_t,\sigma_t,K) + c \Delta + \frac{\rho \alpha}{S_t} c \nu + E[ (\cdot) dZ_t] \\
&= C(S_t,\sigma_t,K) + c \Delta + \frac{\rho \alpha}{S_t} c \nu
\end{align}
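As a rough numerical sketch of the last line (Black-Scholes delta and vega plugged into the first-order expansion; all parameter values below are made up):
```
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, t, sigma):
    # Black-Scholes call price, delta and vega
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    price = S * norm.cdf(d1) - K * np.exp(-r * t) * norm.cdf(d2)
    return price, norm.cdf(d1), S * norm.pdf(d1) * np.sqrt(t)

S, K, r, t, sigma = 100.0, 105.0, 0.0, 0.5, 0.2   # spot, strike, rate, expiry, vol
alpha, rho = 0.8, -0.6                            # vol-of-vol and spot/vol correlation
c = 0.01 * S                                      # the conditioning move dS_t = c (a 1% move)

price, delta, vega = bs_call(S, K, r, t, sigma)
expected = price + c * delta + (rho * alpha / S) * c * vega
print(f"C(S_t) = {price:.4f},  E[C | dS_t = c] ~ {expected:.4f}")
```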
| null |
CC BY-SA 4.0
| null |
2023-04-13T15:11:37.877
|
2023-04-13T15:40:37.220
|
2023-04-13T15:40:37.220
|
65759
|
65759
| null |
75200
|
1
| null | null |
0
|
26
|
I don't understand how Bloomberg quotes ATM swaptions; they just show the same volatility/premium and don't separate calls and puts. Are they the same? How does it tie to normal volatility? Thanks
|
Can someone explain to me how volatility/premium works for ATM swaptions? Why are they the same for calls and puts?
|
CC BY-SA 4.0
| null |
2023-04-13T15:15:49.647
|
2023-04-13T15:15:49.647
| null | null |
67013
|
[
"swaption"
] |
75202
|
1
| null | null |
0
|
48
|
In [Heston Model](https://en.wikipedia.org/wiki/Heston_model) we have two Wiener processes: One for the asset price, the other for the volatility. The model assumes a correlation $\rho$ between the two processes.
I ask what nature this correlation has. According to implementations I've seen online, it seems to be a simple linear correlation. Meaning, for example, if the current (time $t$) price process $W^s_t$ has a z-score of, say, $+2$, and if the correlation $\rho$ is positive, then it puts higher likelihood on a positive z-score for the current volatility process $W^u_t$.
But why assume this kind of correlation between the processes? What does reality reveal? Positive or negative for, say, an index fund? Intuitively, I would expect a correlation, but not linear like this - maybe one taking into account the absolute value of the z-score of $W^s_t$.
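For reference, the correlation in question is an instantaneous linear correlation between the two Brownian increments, typically implemented as below (a minimal sketch; the example value $\rho=-0.7$ is of the negative sign usually calibrated for equity indices):
```
import numpy as np

rng = np.random.default_rng(0)
rho, n = -0.7, 1_000_000          # example correlation and number of increments
dt = 1 / 252

z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

dW_s = np.sqrt(dt) * z1           # increment driving the asset price
dW_v = np.sqrt(dt) * z2           # increment driving the variance

print(np.corrcoef(dW_s, dW_v)[0, 1])   # sample correlation is close to rho
```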
|
What is the correlation between the Wiener processes in the Heston model?
|
CC BY-SA 4.0
| null |
2023-04-13T20:22:42.177
|
2023-04-13T21:51:04.423
|
2023-04-13T21:51:04.423
|
36636
|
22413
|
[
"options",
"stochastic-processes"
] |
75203
|
2
| null |
75164
|
5
| null |
I think it's best if we go through the various terms that appear in your question and explain them one-by-one.
Derivative price: intuitively, a derivative price is what it costs to "create it".
Stock forward price: If somebody asks me to sell them an equity forward (i.e. to deliver a stock at some future date), I could "create it" by borrowing some money at an interest rate $r$ (assume continuous compounding) and buying the stock (at price $S_0$) that I am meant to deliver in the future. At the delivery time $t$, I will deliver the stock and will have to pay back the money that I borrowed ($S_0$) times the compounded interest rate: so $S_0e^{rt}$. This is also the price of the stock forward (exactly what it cost to "create it").
If the stock was yielding some dividends between the time I purchased the stock and delivery, I would get to keep those dividends, so you are correct to point out that the dividends would be subtracted from the forward price of the stock (remember, the derivative price is what it costs to make it, so if I get to keep the dividends, it costs me "less", that's why these are subtracted). If we assume continuous compounding of the dividends at rate $y$, then the forward price would be: $S_0e^{rt-yt}$
Stock future price: basically, the difference between a forward and a future is that a stock forward is traded OTC, whilst a future is exchange traded. There will be some subtle pricing differences related to the interest rates in the margining account on the futures exchange. Also, futures tend to be cash-settled. For simplicity, we can assume the same price as a stock forward.
Cost of carry: This is defined as the cost of holding a security or a physical commodity over a period of time. If I were to "create" a forward on oil with a physical delivery, I could borrow money again and buy the required amount of oil, but I'd need to store the oil somewhere: and this would cost money (storage, security). So it would cost more money in general to "create" a physical commodity forward (such as oil, grain, corn, etc.), compared to a stock forward.
Strictly speaking, a security, such as a stock, doesn't have any associated cost of carry. When you say that the future's value converges to the spot value, that's not a cost of carry: it just means that as the future gets closer to maturity, the "time" value of it approaches zero. In other words, the stock future has a negative Theta (sensitivity to remaining time to maturity): but this sensitivity is relatively small, compared to Delta: i.e. sensitivity to the underlying stock value.
Synthetic stock forward: as you rightly point out, the same pay-off as a stock forward can be obtained by being long a call and short a put (with identical strikes) on the same stock. If I am trying to replicate an "at-the-money" forward, the price of the bought call and the price of the sold put should be equal, so it should cost zero.
Price of a Forward contract: In the first paragraph, we discussed that if we agree today that I sell you a (non-dividend paying) stock in the future for the price of $S_0e^{rt}$, it should cost you zero money today to enter into this contract, because we agreed to trade the stock at the future date for a "fair price": exactly what it would cost me to make sure that I can deliver the stock in the future. This is analogous to buying a synthetic forward (a long call and a short put) for net-zero cost.
However, if you wanted to agree to buy the stock at a different price than the fair price (either higher or lower), the cost of the forward contract would be non-zero. Say the stock price today is $S_0$ but you wanted to buy it in the future for less than that: you would need to pay me (today's value) of the difference between the fair value of the stock future price and the specific price you want to buy it for in the future.
This is analogous to the synthetic-forward case where the price of the call is higher than the price of the put: it means that the call is struck in the money, whilst the put is struck out of the money.
Again, there is no real cost of carry here: you paid money upfront for the privilege to have exposure to a better upside in the future. Your main concern is not "the cost of carry" anyway, but the Delta sensitivity to the underlying stock: for each dollar that the underlying stock goes up, you make one dollar, and for each dollar that the underlying stock goes down, you lose one dollar. That's the same for the synthetic forward and the non-synthetic forward.
Interest rates: these generally have small effect on option prices (it's only noticeable on longer-dated options: calls would increase slightly in price, whilst puts would decrease). So it would become slightly more expensive to hold ITM call and be short OTM put. (But this increase in cost should be immaterial for any options with expiry less than 1-year).
Note: however as I show below, the option's sensitivity to rates is a function of the strike price, so if the underlying has a "high" spot value, such as Nasdaq (which has a spot of ~12,000, an unusually high spot compared to stocks), and therefore the strike is also relatively high, the impact of higher rates can be material: see section on Rho below.
Not sure this answers your question, but at least it might help clear up some of the terms.
## Zero Cost Synthetic Forward:
We want the long call and the short put to cost the same, for a given expiry $T$, today's underlying price $S_0$, interest rates $r$, and implied volatility $\sigma$: in other words, we want to solve for a strike $K$ in the following equation (within the B-S framework):
$$P(t_0, S_0, T, r, \sigma, K)=C(t_0, S_0, T, r, \sigma, K)$$
In other words:
$$e^{-rT}KN(-d_2)-S_0N(-d_1)=S_0N(d_1)-e^{-rT}KN(d_2)$$
Bringing all terms to the RHS:
$$0=S_0\left(N(d_1)+N(-d_1)\right)-e^{-rT}K\left(N(d_2)+N(-d_2)\right) \{eq. 1\}$$
Where:
$$d_1=\frac{\ln\left(\frac{S_0}{K}\right)+rT+0.5\sigma^2T}{\sigma\sqrt{T}}, d_2=\frac{\ln\left(\frac{S_0}{K}\right)+rT-0.5\sigma^2T}{\sigma\sqrt{T}}$$
Choosing $K=e^{rT}S_0$, we get (using the fact that $\ln\left(\frac{S_0}{S_0e^{rT}}\right)=-rT$):
$$N(d_1)=N(0.5\sigma\sqrt{T})\approx0.5; N(-d_1)=N(-0.5\sigma\sqrt{T})\approx0.5$$
and also:
$$N(d_2)=N(-0.5\sigma\sqrt{T})\approx0.5; N(-d_2)=N(0.5\sigma\sqrt{T})\approx0.5$$
Going back to eq. 1:
$$0\overset{?}{=}S_0(0.5+0.5)-e^{-rT}e^{rT}S_0\left(0.5+0.5\right)=S_0-S_0=0$$
As required.
So we have shown that a long call and a short put, struck ATM, should cost very close to net-zero.
(btw, even if we don't use the approximation $N(-0.5\sigma\sqrt{T})\approx0.5$, by symmetry, we have that $N(-0.5\sigma\sqrt{T})+N(0.5\sigma\sqrt{T})=1$ in any case).
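As a quick numerical check (my own sketch, not part of the original answer), we can price the call and the put struck at $K=S_0e^{rT}$ with a plain Black-Scholes routine and verify that the two prices coincide, so the synthetic forward costs net-zero; the inputs are arbitrary.
```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bs_price(s0, k, t, r, sigma, is_call):
    # standard Black-Scholes price, no dividends
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    if is_call:
        return s0 * N(d1) - k * exp(-r * t) * N(d2)
    return k * exp(-r * t) * N(-d2) - s0 * N(-d1)

s0, t, r, sigma = 100.0, 0.25, 0.05, 0.20
k = s0 * exp(r * t)                      # strike set at the fair forward
call = bs_price(s0, k, t, r, sigma, True)
put = bs_price(s0, k, t, r, sigma, False)
print(call, put, call - put)             # call == put, difference ~0
```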
## Broker Fees:
If there is an initial broker fee to enter into a long Nasdaq future contract, this should not be confused with a cost of carry.
Buying and selling options at a broker will attract a bid-offer spread. This will mean that in practice, a long ATM call and a short ATM put will require an initial investment (even though the mid-prices should make the strategy cost zero): you will have to cross the spread twice (hitting broker's offer on the call and hitting broker's bid on the put).
Whether the initial cost of being long the future is higher than doing a synthetic long via options will depend on individual broker fees. But my guess would be that the future should be cheaper, because you only cross the bid-offer once, whereas in options, you cross it twice + options are probably less liquid than the futures (meaning a larger bid-offer).
## Rho - price sensitivity to rates:
As discussed above, the ATM futures price, $S_t$, is simply:
$$S_t=S_0e^{rt-yt}$$.
The sensitivity to interest rates, $\rho$, would be (the formula below gives sensitivity to 1% change in rates, that's why we have $\frac{1}{100}$ in the formula):
$$\rho=\frac{1}{100}\frac{\partial S_t}{\partial r}=\frac{1}{100}tS_0e^{rt-yt}$$
Assuming a 3m expiry (therefore $t=0.25$), the future's sensitivity to a 1% change in interest rates would therefore be $\frac{1}{400}$ of its ATM value today.
Concrete example:
Suppose today's price of Nasdaq is 12,000, interest rates are at 1%, and dividends are at zero (for simplicity). Then the ATM price of a futures contract expiring in 3 months would be: $$12,000e^{0.01*0.25}=12,030$$
Suppose rates go to 5% (current upper band of Fed Funds target rate); using the sensitivity formula, we get:
$$\rho=\frac{4}{400}12,030=120.3$$
So the futures ATM strike would have gone up by ~120 USD, i.e. from ~12,030 USD to ~12,150 USD.
What about the price of options?
We denote the option's sensitivity to rates also $\rho$. For calls, this is:
$$\rho(Call)=\frac{1}{100}KTe^{-Tr}N(d_2)$$
Above, we argued that for an ATM option, $N(d_2)\approx0.5$, and filling in all other variables, we get:
$$\rho(Call)=\frac{1}{100}12,030*0.25e^{-0.25*0.01}0.5\approx15$$
So for every increase in interest rates by 1%, the ATM call price would increase by roughly 15 USD (so an increase from 1% to 5% would mean a 60 USD increase).
Note that, for the ATM-forward strike (where $N(d_2)\approx N(-d_2)\approx0.5$), the Rho value for the put is roughly the opposite of the Rho for the call:
$$\rho(Put)=-\frac{1}{100}KTe^{-Tr}N(-d_2)$$
So for every 1% increase in rates, the put price would decrease by roughly 15 USD.
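To make these numbers easy to reproduce, here is a small sketch (my own addition) that recomputes the forward's and the options' Rho with the same illustrative inputs; the ATM implied volatility is an arbitrary placeholder, so the option Rhos come out near, not exactly at, 15 USD.
```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

s0, t, r, q = 12_000.0, 0.25, 0.01, 0.0
sigma = 0.25                                   # placeholder ATM implied vol
fwd = s0 * exp((r - q) * t)                    # ~12,030
k = fwd                                        # struck at the fair forward

# sensitivity of the forward level to a 1% move in rates (~30 per 1%, ~120 for +4%)
rho_fwd = t * s0 * exp((r - q) * t) / 100

d2 = (log(s0 / k) + (r - q - 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
rho_call = k * t * exp(-r * t) * N(d2) / 100    # roughly +15 per 1% rate move
rho_put = -k * t * exp(-r * t) * N(-d2) / 100   # roughly -15 per 1% rate move

print(fwd, rho_fwd, rho_call, rho_put)
```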
Delta sensitivity: lastly, let's ask whether I can "save" on the "cost-of-carry" if, instead of doing a synthetic forward struck at 12,150 USD, I want to strike it at exactly today's value of Nasdaq spot (assume that's still 12,000 USD).
Earlier above, we argued that for ATM calls, $N(d1)\approx0.5$. In fact, $N(d1)$ is the call-option delta.
For puts, the delta is $-N(-d1)$, so for ATM puts, the delta is the opposite of the call delta, i.e. -0.5.
If we want a quick "guesstimate" of how much more expensive it will be to set up a synthetic forward struck at 12,000 instead of 12,150, we can use the delta.
For the call:
$$\delta(150)\approx0.5*150=75$$
For the put:
$$\delta(150)\approx-0.5*150=-75$$
So the call would get more expensive by 75 USD, if we move the strike from 12,150 to 12,000, whilst the put would get 75 USD cheaper.
Since for the synthetic long forward, we are long the call and short the put, the strategy would get more expensive by 150 USD: i.e. the exact amount I would "save" on the future's "cost-of-carry".
Whichever way we look at it, there is no free lunch.
Rule of thumb: if two strategies offer the same pay-off, they will always cost the same amount to set up (with broker fees & bid-offer spreads, there might be some differences, but in terms of "pricing at mid", the cost must be identical, otherwise there would be arbitrage).
| null |
CC BY-SA 4.0
| null |
2023-04-13T21:04:48.703
|
2023-04-20T10:50:33.443
|
2023-04-20T10:50:33.443
|
43861
|
43861
| null |
75204
|
1
|
75207
| null |
-1
|
74
|
I try to download market daily data after US markets close.
I found that the stocks in the following list have no data today.
```
LBTYB
WLYB
CSTA
SAGA
FSRX
HVBC
CMCA
ELSE
WSO-B
...
```
I try to find the OHLC data on TradingView and Yahoo Finance, but they don't have today's data for the stocks on the above list.
I did not find any news or announcement about those stocks. Why is that?
|
Why some stocks not traded today?
|
CC BY-SA 4.0
| null |
2023-04-13T21:40:44.167
|
2023-04-13T23:19:28.927
| null | null |
67019
|
[
"equities",
"market-data",
"historical-data",
"price"
] |
75205
|
1
| null | null |
1
|
20
|
- When a firm's default risk increases, the cost of debt obviously rises, which increases the WACC and decreases firm value. However, what happens to the cost of equity in this case? Has the proportion of debt in the firm's capital structure fallen (the value of debt has fallen)? This does not make much sense. Is it the case that expected cash flows to equity and therefore equity betas have changed, and the value of both equity and debt has fallen? Is default risk somehow reflected in equity beta (in some other way that is not the weight of debt, which says nothing about how risky that debt is)?
- It is often said that WACC starts increasing at some point with leverage as a result of the costs of financial distress. Given that these are costs, wouldn't it be more accurate to account for them in expected cash flows rather than the WACC? When WACC is said to increase as a result of these costs, is it the increase in the cost of debt or also the increase in the cost of equity? Or does the systematic risk of cash flows increase, hence increasing betas (of equity and debt)? In other words, expected cash flows decrease and the cost of capital increases. This is not double counting of risks in my view, but rather the possibility of calculating betas from cash flows.
|
In traditional asset pricing and valuation, why does the cost of equity increase with the AMOUNT of leverage but not with DEFAULT RISK?
|
CC BY-SA 4.0
| null |
2023-04-13T22:00:39.647
|
2023-04-13T22:00:39.647
| null | null |
65771
|
[
"asset-pricing",
"capm",
"valuation"
] |
75206
|
2
| null |
51041
|
-1
| null |
No. It means Trading At Settlement, meaning futures orders can be executed at (or at a specified offset to) that day's settlement price.
| null |
CC BY-SA 4.0
| null |
2023-04-13T22:09:54.233
|
2023-04-13T22:09:54.233
| null | null |
67020
| null |
75207
|
2
| null |
75204
|
3
| null |
There's a common misconception that everything must trade every day / hour / minute when the market is open.
Quite simply, the price is in equilibrium - no new buyers willing to push the bid higher and no sellers wishing to push the ask lower.
There may be various reasons for this - there might be alternative classes of stock to trade (eg. LBTYA instead of LBTYB, WLY instead of WLYB etc.) or quite simply the stock just isn't very liquid (some of the SPACs you mentioned) or just has a low market cap (eg. HVBC is considered to be a Microcap).
If you seek liquidity you should instead consider stocks that are constituents of higher cap indices such as S&P 500, Russell 1000, etc.
| null |
CC BY-SA 4.0
| null |
2023-04-13T23:19:28.927
|
2023-04-13T23:19:28.927
| null | null |
13859
| null |
75208
|
2
| null |
55137
|
0
| null |
Suppose you scatter random points over a surface whose density function looks like a terrain (hills and valleys): your objective is to reach the region where information about your target shows up as a peak, and the sampling gives you an intuition about the target's location. As you gather more information about the target, you want to shrink the region you sample from. If you adopt rejection sampling, you may sometimes miss the target peak during sampling. So you adopt a method called importance sampling: you choose an appropriate proposal PDF (even a uniform one) covering your sampling space, and weight each draw by the ratio of the target density to that proposal density.
The process then alternates between predicting based on the current information and updating with the new information. This is, broadly, how these Monte Carlo methods work.
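A minimal sketch (my addition, not from the original answer) of importance sampling with a uniform proposal: we estimate the mean of a peaked target density by weighting uniform draws with the ratio of target to proposal density; the target density is just an illustrative placeholder.
```python
import numpy as np

rng = np.random.default_rng(1)

def target_pdf(x):
    # illustrative "peaked terrain": a normal density centred at 2
    return np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2.0 * np.pi)

# proposal: uniform on [-5, 5], with constant density 1/10 on that interval
n = 100_000
x = rng.uniform(-5.0, 5.0, n)
weights = target_pdf(x) / (1.0 / 10.0)

# self-normalised importance-sampling estimate of E[X] under the target
estimate = np.sum(weights * x) / np.sum(weights)
print(estimate)   # close to 2, the location of the peak
```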
| null |
CC BY-SA 4.0
| null |
2023-04-14T04:24:17.140
|
2023-04-14T04:24:17.140
| null | null |
67021
| null |
75209
|
1
| null | null |
0
|
68
|
For my analysis I need a measure of volatility for government bonds. On Bloomberg I could not find any good measure of volatility - they offer some measures which are based on the close prices of each day (that is, from each day only one number is used), which is not precise enough, since I would like to have a measure of volatility for every day (fluctuations during the day).
However, from Bloomberg I can get open price, close price, high price and low price. The difference between high price and low price could be a simple measure which I could use to distinguish between days with large and small fluctuations. But are there some other ways to use these four numbers to get an estimation of volatility for every day? Thanks!
UPDATE:
I can see in the comments that I can use Garman Klass volatility. I have tried to use in the package TTR, but a few things do not make sense to me.
If I use the example in the help file:
```
library(TTR)
data(ttrc)
ohlc <- ttrc[,c("Open","High","Low","Close")]
MyVolatility=volatility(ohlc, calc="garman.klass")
```
Then MyVolatility will have an estimate for each day. But
- How does it know which variables are which? I have tried to change the name of "Close" to another name, and it still gives the calculations. So, should the variables have this exact ordering?
- Does it automatically take the right parameters, like N or is there something I need to specify?
- In this example the first 9 estimates in the MyVolatility are NA. Does that mean that each day's volatility is based on 9 days and not on single day? I mean, would it make sense to use this estimate to determine whether day t is different from day t+1 in terms of fluctuations?
- Finally, is there a way for it to stop giving error if one of the numbers is missing? Like, in other functions one can use na.rm=T.
To sum up, I would like to know the following. Suppose I have a dataframe MyData with many variables including Date, High_Price, Low_Price, Open_Price, Close_Price, and I would like to mutate the volatility estimate for each day like in dplyr. What do I have to write in my code?
And what if I have different bonds in my dataset, is it possible to use dplyr with a group_by?
Thanks a lot!
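For reference (my addition, not part of the original question), the single-day Garman-Klass estimator that TTR implements is usually written as
$$\hat{\sigma}^2_{GK} = \frac{1}{2}\left(\ln\frac{H}{L}\right)^2 - \left(2\ln 2 - 1\right)\left(\ln\frac{C}{O}\right)^2,$$
which can be computed row by row from the four prices; TTR additionally averages it over a rolling window (n = 10 by default, which would explain the nine leading NAs) and annualises with the factor N, so check both arguments against your data frequency.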
|
A measure of volatility that uses open, close, low and high prices?
|
CC BY-SA 4.0
| null |
2023-04-14T11:16:18.163
|
2023-04-15T16:42:50.103
|
2023-04-15T16:42:50.103
|
62001
|
62001
|
[
"volatility"
] |
75211
|
2
| null |
74207
|
3
| null |
A tentative answer as a practitioner.
SABR is over-specified, meaning you could fit a market smile to any arbitrary, non-pathological $\beta$. The so-called redundancy between $\beta$ and $\rho$ is limited to geometrical considerations though. IE $(\beta=0,\rho_0,\nu_0)$ and $(\beta=1,\rho_1,\nu_1)$ may both fit the smile perfectly, but they will produce a different delta, which matters a lot when managing a portfolio.
Remember that volatility markets in the rates space have been using normal vol for the last 15 years more or less. So make sure you look at $\sigma_n$ and not $\sigma_b$ in the Hagan paper, or when plotting your backbone, because rates market participants just don't speak logvol anymore (unlike their equity cousins).
Having said that, a couple possible approaches to decide on $\beta$:
- Mark It Zero (as per Walter Sobchak, The Big Lebowski). The idea is that you know exactly what it means to have a flat backbone and no backbone delta. It might be wrong, but there's a benefit in having a model where the practitioner (trader or portfolio manager) knows the exact quantity of delta he's getting from the backbone. And he may overlay his own finger-up-in-the-air hedging to whatever his SABR analytics tell him. Also has the benefit of being compatible with sub-zero rates/forwards, and not a horrible assumption for a low rates environment (in 2021/2022, significant overlay needed from the trader for proper hedging).
- Infer it from the log/log vol/rate regression over a carefully chosen rolling window of time. The benefit is it will match recent market dynamics. The drawback is it moves, meaning more maintenance is needed both from the quants, to ensure the parameters don't go out of whack and are somewhat smooth on the expiry/tenor grid, AND from the trader, who needs to have a sense of the dynamics implied by the current $\beta$ (a minimal sketch of such a regression is given below).
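Editorial sketch (not from the original answer): to leading order the SABR backbone gives $\sigma_{ATM}\approx\alpha F^{\beta-1}$ for Black (lognormal) vols, so a log/log regression of ATM vol on the forward over a rolling window estimates $\beta$ as one plus the slope (with normal vols, $\sigma_N\approx\alpha F^{\beta}$, the slope estimates $\beta$ directly). Column names and the toy data are hypothetical.
```python
import numpy as np
import pandas as pd

# hypothetical rolling history of forwards and ATM (lognormal) vols
df = pd.DataFrame({
    "forward": [0.031, 0.033, 0.030, 0.035, 0.034, 0.036, 0.032, 0.037],
    "atm_vol": [0.290, 0.285, 0.293, 0.280, 0.282, 0.278, 0.288, 0.276],
})

x = np.log(df["forward"].to_numpy())
y = np.log(df["atm_vol"].to_numpy())

slope, intercept = np.polyfit(x, y, 1)   # ln(vol) ~ intercept + slope * ln(F)
beta_hat = 1.0 + slope                   # since ln(sigma) ~ ln(alpha) + (beta - 1) ln(F)
print(beta_hat)
```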
Quick comment on Bartlett: fine to use when you cant hedge the vol, say in illiquid markets. But be wary of changing relationships and putting too much in your delta.
Hats off to AKdemy for posting Julia code. Julia is the quant language of tomorrow, and I encourage quants to pick it up now.
| null |
CC BY-SA 4.0
| null |
2023-04-14T12:38:49.820
|
2023-04-14T12:44:11.523
|
2023-04-14T12:44:11.523
|
67027
|
67027
| null |
75212
|
1
| null | null |
3
|
75
|
Consider the stochastic process
$$
dy = f(y,s)ds + g(y,s)dw
$$
where, $w$ is Brownian motion.
Now consider the following exponentiated integral
$$
z_1(s) = \exp \left[ - \int_t^s b(y(r),r) dr \right]
$$
This object appears in the Feynman-Kac formula and its derivation (see e.g. [Wikipedia](https://en.wikipedia.org/wiki/Feynman%E2%80%93Kac_formula#Partial_proof) and [page 3 of this lecture notes](https://math.nyu.edu/%7Ekohn/pde.finance/2011/section1.pdf)). The stochastic differential $d z_1$ is simply
$$
dz_1(s) = -z_1 b(y(s),s) ds
$$
i.e. this is effectively ordinary differentiation. I was expecting more terms to appear in the above differential due to Ito's lemma, which would give
$$
dz_1(s) = \left( \partial_s z_1 + f\partial_y z_1 + \frac{g^2}{2}\partial_y^2 z_1 \right) ds + (g \partial_y z_1) dw
$$
and the partial derivatives wrt $y$ would have acted on the integrand $b(y(r),r)$. However, it seems only the $(\partial_s z_1) ds$ term in the RHS above is being retained. Can anyone explain why?
|
Feynman-Kac formula: Ito's lemma for exponentiated integrals $e^{-\int b dr}$
|
CC BY-SA 4.0
| null |
2023-04-14T12:54:37.003
|
2023-04-14T14:17:22.257
|
2023-04-14T14:17:22.257
|
67028
|
67028
|
[
"stochastic-processes",
"stochastic-calculus",
"brownian-motion"
] |
75213
|
1
| null | null |
1
|
33
|
I am currently researching the joint calibration problem of SPX and VIX. The idea is that: VIX options are derivatives on the VIX, which itself is derived from SPX options and should thus be able to be jointly calibrated.
Looking at European markets there is the STOXX50 and VSTOXX indices both of which are traded. However, on the VSTOXX index only options on futures on the VSTOXX itself are traded.
My question is, using the reasoning behind the SPX/VIX joint calibration, shouldn't the STOXX50 and VSTOXX be jointly calibrated, only that the VSTOXX instruments are options on futures?
|
STOXX50 and VSTOXX joint calibration
|
CC BY-SA 4.0
| null |
2023-04-14T16:40:36.067
|
2023-04-14T16:40:36.067
| null | null |
33872
|
[
"option-pricing",
"volatility",
"implied-volatility",
"calibration"
] |
75214
|
1
| null | null |
6
|
230
|
This is a general question that applies to the CAPM and any version of the APT (e.g. the Fama & French three factor model). Speaking in terms of the APT:
Assuming a simple one-index version of the APT I have:
\begin{equation}
R_i = \alpha_i + \beta_{1,i}f_1 + \epsilon_i,
\end{equation}
where, for each asset $i$, $R$ denotes the return, $\alpha$ denotes a constant, $\beta_1$ denotes the factor loading on factor $f_1$ and $\epsilon$ denotes the idiosyncratic error.
It is well known and easy to prove that this implies:
\begin{equation}
E(R_i) = r_f + \beta_{1,i}\lambda_1,
\end{equation}
where $\lambda_i$ denotes the risk premium associated with the corresponding factor.
Now, this clearly states that I can predict the expected value of an asset return cross-sectionally, that is, in the same period as my factor and factor-loading realizations. There is no subscript $t$! Nonetheless, models such as the APT are commonly used to predict the next period's returns, i.e.:
\begin{equation}
E(R_{i,t+1}) = r_{f,t} + \beta_{1,i,t}\lambda_{1,t}.
\end{equation}
My question: Why can I predict returns in $t+1$ with the model - the original APT does relate to expectation within a cross-section? Going from formula 2 to formula 3 necessarily implies that one assumes the factor loadings are constant across $t$ to $t+1$. This is not a reasonable assumption in my opinion.
The only explanation I can come up with:
Usually the $\beta_{1,i}$ is estimated via time-series regressions. When sticking to formula 2, this necessarily implies that I use $R_{i,t}$ both when estimating $\beta_{1,i}$ and when estimating $\lambda_1$. Put differently, my LHS variable in step one is implicitly part of my RHS variable in step two (as it is estimated based on it) - that makes limited sense, probably. When using the expected future return relation in the third formula, I only use $R_{i,t}$ when estimating $\beta_{1,i}$. Hence, formulating it like this is empirically cleaner.
EDIT: To add to my point: Consider the [Cochrane 2011](https://onlinelibrary.wiley.com/doi/10.1111/j.1540-6261.2011.01671.x) JF Presidential Adress. On page 1059 he mentions the FF model, relating expected returns in $t$ to factors in $t$. On page 1062 he then goes on to say "More generally, “time-series” forecasting regressions, “cross-sectional” regressions,
and portfolio mean returns are really the same thing. All we are
ever really doing is understanding a big panel-data forecasting regression,
\begin{equation}
R^{ei}_{t+1}=a+b'C_{it}+\epsilon^i_{t+1}.
\end{equation}
This is exactly what I am finding confusing: How is the cross sectional regression he explicitly formulates earlier, the same as this prediction regression? It is one thing to talk about expected returns in $t$, as the theory on cross-sectional variation does, and another thing to talk about expected returns in $t+1$.
|
Why can I use equilibrium asset pricing models to predict future returns?
|
CC BY-SA 4.0
| null |
2023-04-14T19:19:46.833
|
2023-04-18T07:32:42.123
|
2023-04-17T09:38:13.307
|
44219
|
44219
|
[
"returns",
"factor-models",
"asset-pricing"
] |
75217
|
1
| null | null |
0
|
55
|
How to read the notation used for the swap rates in Table 20.6 below?
What does 2.412/452 mean?
[](https://i.stack.imgur.com/55QVP.png)
|
How to read the notation used for the swap rates in the form 4.412/452 for the 1 year swap rate?
|
CC BY-SA 4.0
| null |
2023-04-14T22:51:04.933
|
2023-05-18T09:04:37.550
|
2023-05-18T09:04:37.550
|
848
|
67037
|
[
"derivatives",
"swaps",
"interest-rate-swap"
] |
75218
|
1
|
75219
| null |
0
|
47
|
Sorry for this dumb question but what are the mathematical-finance /academic conventions for calculating portfolio weights in a long/short portfolio where the longs are fully funded by the shorts? Note that I am asking for the 'academic' conventions, I'm aware that in reality one needs to hold margin etc.
Suppose I am long USD200 of Apple, and I have shorted USD200 of Google stock. What is the portfolio weight of apple and google?
- w_apple = 200/200 = 1 and w_google = -200/200 = -1?
- w_apple = 200/(200 + abs(-200)) = 0.5 and w_google = -200/(200+abs(-200)) = -0.5?
- something else?
Thanks
|
calculating portfolio weight for long short
|
CC BY-SA 4.0
| null |
2023-04-15T00:34:01.633
|
2023-04-16T19:42:39.047
| null | null |
17316
|
[
"portfolio",
"longshort",
"net-asset-value"
] |
75219
|
2
| null |
75218
|
1
| null |
The first one. Your net weight is zero. This is a self financing strategy. Think of it this way: if apple goes up by 10% and google goes down by 5% your return will be:
$r_p = 1 \times 10\% - 1 \times (-5\%)= 15\%$
| null |
CC BY-SA 4.0
| null |
2023-04-15T01:51:52.720
|
2023-04-16T19:42:39.047
|
2023-04-16T19:42:39.047
|
16472
|
16472
| null |
75221
|
2
| null |
75217
|
2
| null |
That is 2.412 bid, 2.452 offered. A 4bp wide market.
| null |
CC BY-SA 4.0
| null |
2023-04-15T06:00:05.093
|
2023-04-15T06:00:05.093
| null | null |
18388
| null |
75224
|
1
| null | null |
0
|
68
|
>
In the CRR model, describe the strategy replicating the payoff
$X=(S_T-K)^{+} +a(K-S_{T-2})^{+}$ for $a \neq 0$
$X$ consists of two parts:
- European call option with strike price $K$ and expiration date $T$
- $a$ European put options with strike price $K$ and expiration date $T-2$
So I think I should replicate these two parts separately, but I don't know how to do that.
|
In the CRR model, describe the strategy replicating the payoff $X=(S_T-K)^{ +} +a(K-S_{T-2})^{+ }$ for $a \neq 0$
|
CC BY-SA 4.0
| null |
2023-04-15T09:37:45.223
|
2023-04-15T17:11:19.717
| null | null |
67042
|
[
"options",
"european-options",
"option-strategies",
"replication",
"strategy"
] |
75225
|
1
| null | null |
5
|
83
|
I'm working through the derivation of Hagan's formula (Hagan et al, 2002) for the implied volatility of an option in the SABR model. I'm finding it pretty confusing. Most of my hang-ups are coming from the singular perturbation expansion. Specifically, how does one use singular perturbation expansion to find the solution of,
$$
P_\tau = \frac{1}{2}\varepsilon^2\alpha^2C^2(f)P_{ff} + \varepsilon^2\rho\nu\alpha^2C(f)P_{f\alpha} + \frac{1}{2}\varepsilon^2\nu^2\alpha^2P_{\alpha\alpha}
$$
to be,
$$
P = \frac{\alpha}{\sqrt{2\pi\varepsilon^2C^2(K) \tau }} e^{-\frac{(f-K)^2}{2\varepsilon^2\alpha^2C^2(K)\tau}}\{1+...\}.
$$
If anyone has any resources/textbooks that they think might help that would be greatly appreciated!
|
In-depth derivation of implied volatility in the SABR model
|
CC BY-SA 4.0
| null |
2023-04-15T10:23:57.110
|
2023-04-15T21:32:04.703
|
2023-04-15T21:32:04.703
|
66505
|
67044
|
[
"implied-volatility",
"sabr",
"derivation"
] |
75226
|
1
| null | null |
1
|
54
|
Let's say I want to optimise allocations between strategies in a multi-strategy fund.
There are 3 strategies, and the CIO want me to solve the portfolio that has 50% of risk in 1st strategy, 40% in second, and 10% in third. This is the standard Roncalli stuff (average risk contributions).
This seems to work for a certain range of covariances. But if the third strategy is sufficiently negatively correlated with the first two, a solution is not possible.
How do you deal with this? Presently, I just simply scale all three strategies to have equal volatility on a standalone basis, and then apply the 50/40/10 allocation weights instead of risk weights. But this does not feel quite right.
|
Risk Budgeting with negative covariance
|
CC BY-SA 4.0
| null |
2023-04-15T10:28:32.537
|
2023-04-15T10:28:32.537
| null | null |
67043
|
[
"portfolio-optimization"
] |
75227
|
2
| null |
75224
|
1
| null |
Do you know how to use the CRR model for a standard Call and a Put? If not, you can learn that from a textbook or this site. If you do, then:
- Find the replicating portfolio for the Call
- Find the replication portfolio for the Put
- Add the weights together, multiplying the weights of the put by $a$. For the final two periods (after the put has expired at $T-2$), set the weight of the put-replicating portfolio to 0.
| null |
CC BY-SA 4.0
| null |
2023-04-15T15:08:09.530
|
2023-04-15T17:11:19.717
|
2023-04-15T17:11:19.717
|
848
|
848
| null |
75229
|
1
| null | null |
1
|
34
|
Training a language model for taking bets in the stock market based on news. For each news release I have three categories for each stock: long bet, short bet, or neither. The NN outputs probabilities for each category.
I'm having a surprisingly hard time getting a good calibration for the probabilities. If I use EV as the loss function, there's no reason to ever not take a bet, no matter how small your opinion over the signal is, because you can set
P(long) = 0.5, P(short) = 0.5
for a very slightly positive EV (because once in a while the price hits both hypothetical profit takers without hitting the stop losses).
On the other hand I use expected utility as the loss, with utility = log(profit). This way a bet with EV close to 0 and positive risk has negative expected utility. However, as I tweak the size of the bet, the NN goes straight from ignoring the risk to never betting with very slight changes in bet size.
I could hunt for the perfect value, but I don't like how fragile results are to a hyperparameter choice.
How do I get the model to only bet when it's confident?
|
Having a hard time getting sentiment analysis bot to calibrate bet probabilities well
|
CC BY-SA 4.0
| null |
2023-04-15T22:02:35.117
|
2023-04-15T22:07:01.853
|
2023-04-15T22:07:01.853
|
67051
|
67051
|
[
"machine-learning"
] |
75231
|
1
| null | null |
1
|
58
|
In the mean-variance framework, the only way to get a higher expected return is to be exposed to a higher beta, and the more risk-averse an agent, the lower the beta of their portfolio (lending portfolios). However, could it be that a more risk-averse individual has a higher discount rate than a less risk-averse individual (i.e., the more risk-averse individual is less keen to provide financing for a particular venture). Or does the risk-return trade-off need to hold in all models that assume rationality and market efficiency (ie., higher expected returns are only achieved by higher risk exposure, given a certain level of aggregate risk aversion as in the mean-variance framework)?
|
Beyond the mean-variance framework, can expected returns be HIGHER for an individual due to a HIGHER risk aversion?
|
CC BY-SA 4.0
| null |
2023-04-16T15:35:09.140
|
2023-05-17T10:14:56.693
| null | null |
65771
|
[
"asset-pricing",
"mean-variance",
"utility-theory"
] |
75232
|
1
| null | null |
0
|
23
|
Since different brokers and data providers might apply their own additional offsets to the spreads of the same instruments, what are some concerns that might arise if we merge their data to create a longer historical dataset to backtest on?
i.e concatenating tick data of one period from data provider 1 to the tick data from data provider 2 for the missing periods in data provider 2.
|
Using different historical data from different brokers during backtesting
|
CC BY-SA 4.0
| null |
2023-04-16T15:44:49.347
|
2023-04-16T15:44:49.347
| null | null |
67058
|
[
"historical-data",
"backtesting",
"tick-data"
] |
75234
|
2
| null |
2065
|
1
| null |
I don’t understand these answers — raw tick (say ITCH) data has MPIDs (Market Participant Identifier) associated with orders…
It’s certainly possible to ‘tag’ MPID’s in equities, or GFID’s (Globex Firm IDs) in futures, and determine who is who to some degree.
From retail — no, you’re mostly just dealing with quotes from venues at greatest degree of granularity there.
We get our data from MayStreet Inc. (now part of Refinitiv), it has MPID on every single ‘event’.
| null |
CC BY-SA 4.0
| null |
2023-04-16T19:46:21.323
|
2023-04-17T07:35:42.290
|
2023-04-17T07:35:42.290
|
16148
|
67062
| null |
75236
|
1
| null | null |
2
|
100
|
I am studying a course and I am a bit confused about how to find a bond's $\sigma$. My course mentions the following:
Once we have calculated the expected return on the bond, $\mathrm{E}(r_d)$, we can
calculate the debt beta:
$\mathrm{E}(r_d) = r_f + \beta_d(\mathrm{E}(r_M) − r_f)$
$\beta_d = (\mathrm{E}(r_d) − r_f)/(\mathrm{E}(r_M) − r_f)$
where $r_f$ and $r_M$ are the usual risk-free rate and market portfolio and where:
- $\mathrm{E}(r_d)$: expected return on debt
- $r_f$: return on riskless debt
- $\mathrm{E}(r_M) − r_f$: excess return on the equity market portfolio (the market risk premium)
If we know the Beta of debt, we can estimate the volatility of
debt:
$\beta_d = \rho(r_M, r_d) \times (\sigma_d/\sigma_M)$
$\sigma_d = (\beta_d \times \sigma_M)/\rho(r_M, r_d)$
There is still an assumption that needs to be made about the correlation between the market portfolio and debt. But I believe this is negative. If I use this formula and $\beta_d$ is positive, then $\sigma_d$ will be negative, and this doesn't make sense to me. The only way that $\sigma_d$ is positive is if $\beta_d$ is negative. Am I wrong? Can someone please help me understand? Thanks
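To make the sign issue concrete (my addition, not part of the original question): since $\sigma_d$ and $\sigma_M$ are positive by construction, the definition of beta forces $\beta_d$ and the correlation to share the same sign,
$$\beta_d=\rho(r_M, r_d)\,\frac{\sigma_d}{\sigma_M}\quad\Rightarrow\quad \operatorname{sign}(\beta_d)=\operatorname{sign}\big(\rho(r_M, r_d)\big),$$
so assuming a positive $\beta_d$ together with a negative correlation is internally inconsistent, rather than evidence that $\sigma_d$ could be negative.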
|
Risk of bond calculation
|
CC BY-SA 4.0
| null |
2023-04-16T21:18:02.610
|
2023-04-27T11:30:21.117
|
2023-04-27T11:30:21.117
|
5656
|
67064
|
[
"fixed-income",
"risk"
] |
75238
|
1
| null | null |
0
|
5
|
I am doing research at the University of Minnesota (where I teach) for a class in investment banking, using YRIV as a case study. How can I find intra-day trading information (OHLC and volume) for this stock, for specific dates in 2018? Thank you.
|
intra-day trading information (OHLC and volume) for YRIV for specific dates in 2018
|
CC BY-SA 4.0
| null |
2023-04-16T22:08:35.213
|
2023-04-16T22:08:35.213
| null | null |
67066
|
[
"equities"
] |
75239
|
2
| null |
75164
|
5
| null |
Both answers already address the gist of the question. I decided to add (quite) some details because I think there is some confusion from the OP. It is not the future that has carry costs or benefits but the underlying. This is also an important fact for pricing options. However, there is no additional "feature" to the cost of carry that is not already incorporated in the forward pricing if you replicate a forward synthetically. Any deviation from the fair future (zero cost as no one is made better or worse off) will result in a compensation of one counterparty. That's it. The rest below just elaborates on this.
The following screenshot displays for various different implied volatilities (IV), the associated call (c) and put (p) values computed with Black Scholes (with cost of carry), Black76 (without cost of carry but using the forward / future as the underlying), the fair forward / future (Fwd), the strike used, the price of the synthetic long future/forward (C-P), and the compensation for not using the fair forward /future $((F-K)*e^{-r*t})$, as well as the Put & Call values computed with put call parity (PCP). As you can see, forwards and synthetic forwards always match each other in terms of pricing and the cost of carry is always the same. The code and explanations will follow below.
As you can see, all that matters is where K is relative to the fair future. The option prices always adjusts for this and simply resemble the price of the forward /future (as should be, otherwise there would be arbitrage).
[](https://i.stack.imgur.com/1hGrA.png)
Details
The Black Scholes Merton (BSM) and [Black 76](https://en.wikipedia.org/wiki/Black_model) option pricing models are both well-known and widely used. The only model difference versus the BSM model is that the underlying future in the Black model has no carry costs or benefits.
In words, the cost of carry relationship describes the relative cost of buying a stock with deferred delivery (the future) versus buying it in the spot market with immediate delivery and "carrying" it forward. If you buy stock now, you tie up your funds and incur a time value of money cost of $r$ per period. On the other hand, you receive dividend payments (carry benefit) of $d$. (Technically, being short spot inverts this, and the cost of carry is the cost of paying dividends).
This advantage must be offset by a differential between the futures and the spot price. Therefore, the OP's comment made under @Jan Stuller's answer makes no sense:
>
If it were immaterial everyone would buy ATM calls, sell ATM puts, and
sell futures. Net price movement of such a strategy is zero, yet
selling futures earns one (interest rate - dividend yield) every year.
Which would be free money that I could leverage as high as my broker
would let me.
The future price is exactly offsetting this difference and there is no free money. It may be less obvious with equity but should be quite clear with FX (where the concept is identical, just with two interest rates). It is called [Covered Interest Parity](https://quant.stackexchange.com/a/71012/54838) (CIP).
[](https://i.stack.imgur.com/w8Aze.png)
No matter what you do, returns from investing domestically are equal to the returns from investing abroad. This works because you enter a forward and fix that rate that guarantees no arbitrage.
Now, back to options, let's begin with BSM, which has the carry costs (and benefits) of the underlying incorporated. After all, the model is for European options (hence deferred delivery) just like equity futures, but it has the spot market as the underlying (unlike Black 76, which takes the future as the underlying). Writing BSM in [Julia](https://julialang.org/) looks like this (I like to use it because the code looks almost like a math textbook and the language offers simple, yet powerful plotting libraries paired with speed):
```
using Distributions
N(x) = cdf(Normal(0,1),x)
function BSM(s,k,t,r,d, σ, cp)
d1 = ( log(s/k) + (r -d+ σ^2/2)*t ) / (σ*sqrt(t))
d2 = d1 - σ*sqrt(t)
opt = exp(-d*t)*cp*s*N(cp*d1) - cp*k*exp(-r*t)*N(cp*d2)
delta = cp*exp(-d*t)*N(cp*d1)
return opt, delta
end
```
CP is a call put flag (1 for call, -1 for put). Ignoring it, one has the following formula:
$$e^{-d*t}*S*N(d1) - K*e^{-r*t}*N(d2)$$
where $d$ is the dividend, $r$ is the risk-free rate and $d1$, $d2$ are the standard BSM model inputs as shown on [Wikipedia](https://en.wikipedia.org/wiki/Greeks_(finance)#Formulas_for_European_option_Greeks). To highlight the carry benefit adjustment in the BSM model, one can rewrite the call and put value as the present value (PV) of the expected option payoff at expiration.
$$E(c_T) = \color{blue}{S*e^{(r-d)*T}}*N(d1) - K*N(d2)$$
and
$$E(p_T) = K*N(-d2) - \color{blue}{S*e^{(r-d)*T}}*N(-d1)$$
It should be clear now why the cp flag in the BSM function in the Julia code works.
This also shows nicely that the BSM model value is really just a dynamically managed portfolio of the stock and zero-coupon bonds (the financing part, which can also be seen as bank borrowing or lending). The discounted price of the zero coupon bond is $K*e^{-r*T}$; the stock itself is influenced by the carry benefits (cb), and high cb will lower the call option price. In summary, for calls, one needs to buy N(d1) stocks (adjusted for the carry benefit) and hold -N(d2) bonds. Since the bond position -N(d2) is negative while N(d1) > 0, one needs to borrow to buy the stock.
Now, let's plug in some hypothetical numbers. I'll use t = 1 year throughout to avoid complications with daycount differences between rates, dividends, IV and so forth.
```
using DataFrames
s,k,t,d,r,σ = 100, 100, 1,0.03, 0.04, 0.3
call = BSM(s,k,t,r,d,σ,1)
put = BSM(s,k,t,r,d,σ,-1)
df = DataFrame("Call" => call[1],"Delta Call" => call[2], "Put" => put[1], "Delta Put" => put[2])
PrettyTables.pretty_table(df, border_crayon = Crayons.crayon"blue", header_crayon = Crayons.crayon"bold green", formatters = ft_printf("%.4f", [1,2,3,4]))
```
[](https://i.stack.imgur.com/YoFuL.png)
This is ATM Spot, hence the resulting synthetic forward would not be zero cost. Nonetheless, the (carry benefit adjusted) put-call parity, defined as
$$p + S*e^{-d*t} == c + e^{-r*t}*K$$
works, as it does for any strike.
```
println("PC Parity computed Put value = $(round((c + exp(-r*t)*k -s*exp(-d*t)),digits = 4))")
println("Put Price according to BSM = $(round(put[1],digits = 4))")
```
PC Parity computed Put value = 11.0371
Put Price according to BSM = 11.0371
If we want to price a synthetic zero cost forward we need to first compute the fair future value of the stock, $S*e^{(r-d)*T}$. We can put this argument one step further. I claimed that Black 76 has no carry costs or benefits to take care of, because the future is already computed with the carrying cost of the spot "in mind". Let's define Black in Julia:
```
function Black76(F,K,t,r,σ, cp)
    d1 = (log(F/K) + 0.5*σ^2*t) / (σ*sqrt(t))
d2 = d1 - σ*sqrt(t)
opt = cp*exp(-r*t)*(F*N(cp*d1) - K*N(cp*d2))
return opt
end
```
Rates or dividends show up nowhere, apart from discounting the expected payoff back to today. Interesting side remark: futures contracts are marked to market and so the payoff is realized when the option is exercised. If we would consider an option on a forward contract expiring at time T̃ > T, the payoff doesn't occur until T̃. Thus, the discount factor would need to take this extra time into account.
Combining this into a DF shows that this indeed yields the desired output:
```
k = s*exp((r-d)*t)
f = k
call = BSM(s,k,t,r,d,σ,1)
put = BSM(s,k,t,r,d,σ,-1)
call_Black = Black76(f,k,t,r,σ,1)
put_Black = Black76(f,k,t,r,σ,-1)
df = DataFrame("Call" => call[1],"Delta Call" => call[2], "Put" => put[1], "Delta Put" => put[2], "Forward" => k, "Call Black76" => call_Black, "Put Black76" => put_Black)
PrettyTables.pretty_table(df, border_crayon = Crayons.crayon"blue", header_crayon = Crayons.crayon"bold green", formatters = ft_printf("%.4f", [1,2,3,4]))
```
[](https://i.stack.imgur.com/sF0af.png)
Now, we do have a zero-cost synthetic forward. Cost of carry play no further role here, besides defining the fair future value (or strike). $\color{blue}{Deviating\ from\ this\ fair\ price\ will\ just\ mean\ that\ you\ pay\ or\ receive\\ an\ upfront\ compensation\ and\ it\ is\ not\ zero\ cost\ at\ initiation.}$
However, with equity options you have another problem. Many stock options are American. As such, your position may be subject to early exercise. There are 2 circumstances that can lead to the value of a European option being lower than intrinsic value:
- a) deep ITM puts in presence of positive interest rates r>0
- b) deep ITM calls in presence of positive dividend yield q>0
which also coincides with the 2 circumstances under which it makes sense for an American option to be exercised early (which can matter for synthetic equity forwards). Some intuition is given [here](https://quant.stackexchange.com/questions/73613/can-european-call-option-on-stock-have-positive-theta-assume-positive-interest/73629#73629), where the graphic below is taken from.
[](https://i.stack.imgur.com/sx4mh.gif)
Any area that is shaded (to the right) will mean early exercise for American options. In that case, you do not really own a synthetic forward, because if spot moves significantly, one of your legs may be exercised away early. On top of that, these options are frequently a lot less liquid compared to the future and, as Jan Stuller wrote, you will have two transactions instead of one.
Edit
I highlighted the formula in blue to show that cost of carry is fully integrated into BSM. I mentioned that Black (for pricing options on futures) does not need this cost of carry adjustment because there is no cost of carry for futures. I wrote that cost of carry plays no further role besides determining the fair future (or strike). I added all code and numerical examples. Now, all that is left is to try out a few different values to see that there is indeed no such thing as moneyness- or delta-adjusted carry. The cost of carry is simply a no-arbitrage relationship that means no one is made better or worse off - hence there is no upfront cost.
Let's look at a numerical example where we do not use the fair strike. As shown above, for ATM spot, the values of the call and put are not identical: the options cost 12.0027 and 11.0371 respectively, hence a total cost of ~0.9656. As shown, the fair forward is $f = s*exp((r-d)*t) \approx 101.005$. Discounting the difference (there is always time value for everything) gives you $(f-k)*e^{-r*t} \approx 0.9656$. That it matches the difference between the call and put is no coincidence, but a simple correction for the unfair (synthetic) forward you enter. This works with any value of interest rates, dividends, IV and whatever else concerns option pricing (as long as we are talking about European options). Below is the (somewhat messy) code that creates the DataFrame from the very top. The highlighted lines indicate where K changes (and IV starts to iterate from the beginning).
```
# define range of volatilities
σ = 0.1:0.1:1
#compute fair forward
f = s*exp((r-d)*t)
# try a few different strikes (ATM, OTM, ITM)
k,k2,k0 = 102,98,f
# define Black Scholes for the strikes
call, call2, call0 = BSM.(s,k,t,r,d,σ,1),BSM.(s,k2,t,r,d,σ,1), BSM.(s,k0,t,r,d,σ,1)
put, put2, put0 = BSM.(s,k,t,r,d,σ,-1), BSM.(s,k2,t,r,d,σ,-1), BSM.(s,k0,t,r,d,σ,-1)
call_Black,call_Black2, call_Black0 = Black76.(f,k,t,r,σ,1), Black76.(f,k2,t,r,σ,1), Black76.(f,k0,t,r,σ,-1)
put_Black, put_Black2, put_Black0 = Black76.(f,k,t,r,σ,-1), Black76.(f,k2,t,r,σ,-1), Black76.(f,k0,t,r,σ,-1)
# create result arrays
vols = append!(append!([σ[i] for i in 1:1:length(σ)],[σ[i] for i in 1:1:length(σ)], [σ[i] for i in 1:1:length(σ)]))
c = append!(append!([call0[i][1] for i in 1:1:length(σ)], [call[i][1] for i in 1:1:length(σ)], [call2[i][1] for i in 1:1:length(σ)]))
p = append!(append!([put0[i][1] for i in 1:1:length(σ)], [put[i][1] for i in 1:1:length(σ)],[put2[i][1] for i in 1:1:length(σ)]))
c_black = append!(append!([call_Black0[i][1] for i in 1:1:length(σ)], [call_Black[i][1] for i in 1:1:length(σ)],[call_Black2[i][1] for i in 1:1:length(σ)]))
p_black = append!(append!([put_Black0[i][1] for i in 1:1:length(σ)], [put_Black[i][1] for i in 1:1:length(σ)],[put_Black2[i][1] for i in 1:1:length(σ)]))
# create dataframe
df = DataFrame("IV" => vols ,
"C" => c, "P" => p,
"C Black" => c_black, "P Black" => p_black,
"Fwd" => f,
"Strike" => append!(append!([k0 for i in 1:1:length(σ)], [k for i in 1:1:length(σ)],[k2 for i in 1:1:length(σ)])),
"C - P" => [round(c[i][1].-p[i][1],digits =3) for i in 1:1:length(vols)],
"(F-K)*e^(-r*t)" => append!(append!([round((f-k0)*exp(-r*t),digits = 3) for i in 1:1:length(σ)], [round((f-k)*exp(-r*t),digits = 3) for i in 1:1:length(σ)],[round((f-k2)*exp(-r*t),digits =3) for i in 1:1:length(σ)]))
)
## put call parity computations for call and put
df[!, "PCP C"] = round.(df.P .- exp(-r*t).*df.Strike .+ s*exp(-d*t), digits = 4)
df[!, "PCP P"] = round.(df.C .+ exp(-r*t).*df.Strike .- s*exp(-d*t), digits = 4)
# PrettyTables formatting
hl_1 = Highlighter((data,i,j) -> data[i,1] == 0.100, crayon"bg:dark_gray white bold")
h2 = Highlighter( (data,i,j)->j in (8, 9) && data[i, j] == 2.887,
bold = true,
foreground = :blue )
h3 = Highlighter( (data,i,j)->j in (8, 9) && data[i, j] == 0.000,
bold = true,
foreground = :green )
PrettyTables.pretty_table(df, border_crayon = Crayons.crayon"blue", header_crayon = Crayons.crayon"bold green", formatters = ft_printf("%.3f", [1,2,3,4,5,6,7,8,9,10,11]), highlighters = (hl_1, hl_value(-0.956), h2,h3))
```
Julia Ad-on
Unrelated to the question, but the animation is pure Julia code. The sliders (and much more) can be created with [Interact](https://github.com/JuliaGizmos/Interact.jl). A nice demo (in my opinion, which may be biased because I wrote it) is the very short code below, which plots interactive 3D surfaces of the call value and various greeks in spot and time dimension. As long as Black Scholes is defined (as above just with more Greeks), the actual chart is just 7 lines of code. The quality is reduced here because the allowed GIF size is very small in imgur.
```
gui = @manipulate for K=K_range, rf=rf_range,d=d_range,σ = 0.01:0.1:1.11,α=0.1:0.1:1, side = 10:1:45,up = 20:2:52;
z = [Surface((spot,time)->BSM.(spot,K,time,rf,d,σ)[i], spot, time) for i in 1:1:6]
title = ["Call Value", "Vega","Delta","Gamma","Theta","Rho"]
p = [surface(spot,time,z[i], camera=(12,20),α=0.8 ,xlabel="Spot",ylabel="time",title=title[i],legend = :none) for i in 1:1:6]
plot(p[1],p[2],p[3],p[4],p[5],p[6],layout=(3,3), size =(1000,800))
end
@layout! gui vbox(vbox(hbox(K,rf,d,σ),hbox(α,side,up)), observe(_))
```
[](https://i.stack.imgur.com/yck2s.gif)
In case anyone is interested in Julia, I did some work a while ago which I partially shared on [Econ Stack](https://economics.stackexchange.com/a/50486/37817) to showcase why Julia is not slowing you down (unless your code is written in non-performant ways).
| null |
CC BY-SA 4.0
| null |
2023-04-16T22:44:14.117
|
2023-04-19T20:54:34.747
|
2023-04-19T20:54:34.747
|
54838
|
54838
| null |
75241
|
1
|
75243
| null |
0
|
51
|
We have an asset with the price process $S_t$, $0\leq t\leq T$
Further we have a zero coupon bond, and the price of it at time $t$ is denoted by $P(t,T)$ (with payoff 1 at time T). Let $F_{(t, T )}[S]$ denote the forward price, I need to argue that the following relationship holds:
$F_{(t, T )}[S] =\frac{S_t}{P(t,T)}$
Could someone please help me with the argumentation?
|
forward rate/zero coupon
|
CC BY-SA 4.0
| null |
2023-04-17T01:33:01.797
|
2023-04-17T02:46:18.913
| null | null |
67068
|
[
"bond",
"forward-rate"
] |
75242
|
1
|
75245
| null |
2
|
155
|
I fit a GARCH(1,1) model on the spread of 2 correlated assets :
[](https://i.stack.imgur.com/0vDwPm.png)
the GARCH model shows this summary:
```
==========================================================================
coef std err t P>|t| 95.0% Conf. Int.
--------------------------------------------------------------------------
omega 0.2066 5.839e-02 3.537 4.042e-04 [9.211e-02, 0.321]
alpha[1] 0.6416 5.479e-02 11.712 1.107e-31 [ 0.534, 0.749]
beta[1] 0.3584 6.020e-02 5.953 2.640e-09 [ 0.240, 0.476]
==========================================================================
```
From this point, nothing weird, but then when I plot the standardized residuals (residuals divided by their conditional volatility):
[](https://i.stack.imgur.com/cDfUjm.png)
In order to retrieve entry/exit signals for my strategy, I'm doing a two-tailed test on the distribution of these standardized residuals. However, as you can see, the distribution is very weird:
[](https://i.stack.imgur.com/X2bRim.png)
Is it normal to have such a bimodal distribution for the standardized residuals from a GARCH model? I'm asking because this is definitely not something I was expecting (standard normal distribution, or at least a Student's t with fatter tails), nor something I found on the Internet as what one can expect for GARCH standardized residuals. What did I miss here?
|
Standardized residual by GARCH model shows bimodal distribution, is it normal?
|
CC BY-SA 4.0
| null |
2023-04-17T01:36:25.817
|
2023-04-17T07:33:15.497
| null | null |
63143
|
[
"garch"
] |
75243
|
2
| null |
75241
|
1
| null |
I think there are two things to get straight for the correct intuition: understand what the forward price (1) and $P(t, T)$ with pay-off 1 at $T$ (2) mean. Let's begin with the forward price:
$F_{(t,T)}[S]$ means the price you agree at time $t$ to pay for the stock $S$ when it is delivered at time $T$. At least I would assume this for clarity; correct me if I am wrong.
Now the 2nd building block: a zero-coupon bond that pays $FV=1$ at $T$. Since we are so lucky that we have $FV=1$, we can treat $P(t, T)$ as a discount factor. If $P(t, T)$ evaluates to, say, $0.8$, that means that 1 dollar at time $T$ is worth 0.8 dollars at time $t$.
Let's now put everything together. Usually forward prices for assets are quoted as follows:
$F_0=S_0e^{rT}$, where $F_0$ is the forward price of a stock given its price $S_0$. Here $S_0$ is multiplied by the inverse of the discount factor, i.e. continuously compounded at the rate $r$ over time $T$, to arrive at the forward price. See if you can connect the dots.
So in your case $P(t, T)=e^{-r(T-t)}$, and essentially $P(t, T)<1$, which makes it natural to interpret $F_{(t,T)}[S]=S_tP(t,T)^{-1}$. You can verify this by plugging in some numbers for $P(t,T)$: dividing by the discount factor is essentially compounding the value of the stock from time $t$ to time $T$, and that compounded value is the forward price of the stock.
Disclaimer: the reason I am not writing down more formulas for these variables is that it's not necessary; there is no single definition for them, since the same properties hold for many different types of functions that represent asset prices and discount factors.
Edit: this didn't directly answer your question, so ask yourself: given the context above, could an arbitrage opportunity arise if the equality didn't hold? If so, how would you make money out of it?
I hope this helps you.
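Editorial sketch of the replication argument hinted at above (assuming the asset pays no income and has no storage cost): compare two portfolios held from $t$ to $T$,
$$\text{(A) buy the asset for } S_t \qquad \text{vs.} \qquad \text{(B) enter the forward at } F_{(t,T)}[S] \text{ and buy } F_{(t,T)}[S] \text{ zero-coupon bonds, costing } F_{(t,T)}[S]\,P(t,T).$$
Both portfolios deliver exactly one unit of the asset at $T$, so absence of arbitrage forces $S_t = F_{(t,T)}[S]\,P(t,T)$, i.e. $F_{(t,T)}[S] = S_t/P(t,T)$; if the equality failed, selling the expensive portfolio and buying the cheap one would lock in a riskless profit.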
| null |
CC BY-SA 4.0
| null |
2023-04-17T02:31:13.087
|
2023-04-17T02:46:18.913
|
2023-04-17T02:46:18.913
|
64734
|
64734
| null |
75245
|
2
| null |
75242
|
2
| null |
This is not normal. I bet that you have not specified the distribution of the standardized innovations to be a bimodal one when specifying the GARCH model. If so, your model is misspecified. And even if there was a possibility to specify a bimodal distribution for standardized innovations, you would probably rather prefer to account for the bimodality in the conditional mean (and perhaps the conditional variance) equation instead.
To give an unrelated, hypothetical example, consider the wage distribution of a population. If the distribution for males has a different peak than the one for females, you might end up with a bimodal distribution for the total population. Why not use a sex dummy for the conditional mean (and probably for some higher-order moments) to account for that instead of trying to find a suitable bimodal distribution? The sex dummy approach makes matters more transparent.
| null |
CC BY-SA 4.0
| null |
2023-04-17T07:33:15.497
|
2023-04-17T07:33:15.497
| null | null |
19645
| null |
75246
|
2
| null |
75231
|
0
| null |
The mean-variance framework is about optimal portfolio choice given the distribution(s) of asset prices/returns. On the other hand, the risk-return trade-off comes from an asset pricing model that produces the distribution(s). Therefore, the following is not quite right:
>
In the mean-variance framework, the only way to get a higher expected return is to be exposed to a higher beta.
This follows from the asset pricing model, not the optimization framework. Hypothetically, if the asset pricing model implied higher risk assets had lower expected returns, exposure to a higher beta would not lead to a higher expected return – whether we use mean-variance optimization or not.
>
However, could it be that a more risk-averse individual has a higher discount rate than a less risk-averse individual?
Risk aversion for a particular individual is characterized by their utility function, not the discount rate. The discount rate may characterize impatience, though. This is in the context of portfolio optimization, e.g. the mean-variance framework. Meanwhile, in an asset pricing model the discount rate characterizes the risk aversion of a representative individual in a market equilibrium.
>
Or does the risk-return trade-off need to hold in all models that assume rationality and market efficiency?
The trade-off is due to the asset pricing model. Hypothetically, we could introduce an asset pricing model where the trade-off goes the other way (higher risk brings about lower expected return) or where there is no trade-off at all.
| null |
CC BY-SA 4.0
| null |
2023-04-17T08:23:48.897
|
2023-04-17T08:29:48.277
|
2023-04-17T08:29:48.277
|
19645
|
19645
| null |
75247
|
1
| null | null |
0
|
41
|
I should preface this by saying I am an undergraduate physics student, this is more of a side interest to me, so I apologise if I am missing something obvious. I am not following a formal class or taught guide, just going based off internet research.
I want to demonstrate the difference in the predicted stock options prices from The standard Black-Scholes European Call option formula (assuming constant volatility) compared with what the GARCH(1, 1), ARCH(1) and EGARCH(1, 1) models predict the options price to be (and finally compared to the actual options prices). I have used python to do the following:
- Acquire log returns data as a time series using yfinance on two stocks, e.g. yf.download("GOOGL", start="2010-01-01", end="2022-03-25") and yf.download("AMZN", start="2010-01-01", end="2022-03-25")
- Split the log returns data into two sets, a training set made up of the first 80% of the data, and a testing set made up of the remaining 20% of the data.
- Use a maximum likelihood estimation method (Scipy Minimise) to acquire the parameters for each model ARCH(1), GARCH(1, 1) and EGARCH(1, 1).
- Use these model parameters to forecast volatility in the date range of the testing set, and compare it to the "actual" volatility in this range. The actual volatility is calculated using the standard deviation of the returns in the testing set over a 30 day rolling window (pandas method).
- Plot the rolling standard deviations against the ARCH, GARCH and EGARCH forecasts as a visual representation
But this only shows me the volatility movements within the time range of the testing set time series. How would I acquire a volatility value from each model, to input into the Black-Scholes equation to calculate an options price (with all other variables in B-S held constant except for strike price and stock price which are varied to get prices for in the money and out of the money options)? In the case of the Black-Scholes constant volatility assumption, I could use a single volatility value right, but for ARCH, GARCH and EGARCH since they assume volatility is a function of time, would I let the volatility parameter in B-S become a function of time and therefore the $d_1$ parameter in B-S becomes:
$$d_1 = \frac{\ln\left(\frac{S}{K}\right)+rT-\frac{1}{2}\int^T_0\sigma^2(t) \,\text{d}t}{\sqrt{\int^T_0\sigma^2(t) \,\text{d}t}}$$
where $S$ is the stock price, $K$ is the strike price, $T$ is the time to expiration and $r$ is the risk-free interest rate. I thought if I did this integration, it would give me the cumulative volatility predicted by the GARCH, ARCH and EGARCH model within the time range of the testing set, which I'm not sure would be the right input into the B-S.
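For reference, a stripped-down sketch of the MLE step (step 3 above) for the GARCH(1,1) case, assuming a Gaussian conditional likelihood - simplified, not my exact code:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, returns):
    # returns: numpy array of demeaned log returns
    omega, alpha, beta = params
    n = len(returns)
    var = np.empty(n)
    var[0] = np.var(returns)                                # initialise with the sample variance
    for t in range(1, n):
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
    # negative Gaussian conditional log-likelihood (constants dropped)
    return 0.5 * np.sum(np.log(var) + returns ** 2 / var)

# e.g. res = minimize(garch11_neg_loglik, x0=[1e-6, 0.05, 0.90], args=(returns,),
#                     bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)], method="L-BFGS-B")
# omega, alpha, beta = res.x
```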
|
How to use GARCH/ARCH/EGARCH volatility forecasts to compare the Black Scholes constant volatility assumption with GARCH/ARCH/EGARCH volatility
|
CC BY-SA 4.0
| null |
2023-04-17T15:48:43.490
|
2023-04-17T23:06:24.380
|
2023-04-17T23:06:24.380
|
66110
|
66110
|
[
"volatility",
"black-scholes",
"programming",
"garch"
] |
75248
|
1
| null | null |
0
|
32
|
When I day trade, I easily notice support and resistance areas. Visually it seems very intuitive to spot them, but I can't define them so that I can spot them using python. Can you give me a clear definition of these support and resistance zones? How can I code them in python?
|
Define the supports and resistances
|
CC BY-SA 4.0
| null |
2023-04-17T16:35:40.110
|
2023-04-17T16:50:38.177
|
2023-04-17T16:50:38.177
|
66294
|
66294
|
[
"programming",
"trading"
] |
75249
|
2
| null |
42311
|
2
| null |
Refreshing to see this type of question in this forum. Regarding the first part of the question (let's leave the detailed calculation aside for a moment), I think what the interviewer is trying to get at is your ability to spot a book's 'general' delta and gamma in the first instance (i.e. by just eyeballing) before actually getting to any computations (e.g. is the book long gamma?).
Normally traders would look at their delta and gamma profiles (these just mean your 1st and 2nd order risk profiles rather than the P/L profile shown in the question) - and these lead to various P/L scenarios (which a trader will be able to 'predict' based on her knowledge of these risk profiles). Here things are the other way around, and the question asks for some reverse engineering. P/L profiles of the sort shown in the question, though revealing and important for management, are not especially useful for traders for exactly this reason.
So in the first instance, it's quite clear this book is long gamma. Why? Well the first thing to answer is what's the delta? As user MaPy has indicated above, this is just -3,957 per 25bps ((-2745-5168)/2). This is your delta now i.e. for 0 shift in rates when your P/L is 0. Now if gamma was zero, you'd see +3,957 in the -25bps column and -3,957 in the +25bps column of the P/L profile. But what you actually see in these columns is +5,168 (=3,957+1,211) and -2,745 (=-3,957+1,211), respectively. In other words your delta changes by +1,211. So the -25bps/+25bps gamma profile is +1,211/+1,211. Hence the book is long gamma both on the up and downside (gamma profiles can be skewed too - not the case here).
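For completeness, a quick finite-difference sketch of the same arithmetic (Python; the two inputs are just the ±25bp P/L numbers from the question):

```python
# P/L profile read off the book's grid
pl_down = 5168.0    # P/L for a -25bp parallel shift
pl_up   = -2745.0   # P/L for a +25bp parallel shift

delta_pnl = (pl_up - pl_down) / 2.0    # ≈ -3957: P/L per +25bp from the linear (delta) term
gamma_pnl = (pl_up + pl_down) / 2.0    # ≈ +1211: convexity P/L of a 25bp move either way

print(delta_pnl, gamma_pnl)            # gamma_pnl > 0  =>  the book is long gamma
```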
| null |
CC BY-SA 4.0
| null |
2023-04-17T17:51:52.363
|
2023-04-17T17:51:52.363
| null | null |
35980
| null |
75250
|
2
| null |
75214
|
4
| null |
Short answer: Yes and no.
Long answer: Yes, as you correctly point out with the Cochrane reference, you can use a factor model to predict stock market returns. How good that prediction is will depend on how well you estimate means, variances and covariances. Let's proceed in steps, and let me use the CAPM as the factor model; everything below can be extended to a multi-factor model:
- The conditional CAPM implies:
$$ E_t[r_{i}] - r_f = \beta_{i,t} E_t[r_m - r_f] $$
- Covariances and consequently betas are usually stable in short-horizons so I can write:
$$ E_t[r_{i}] - r_f = \beta_{i} E_t[r_m - r_f] $$
- The equation above is valid for one period ahead:
$$ E_{t+1}[r_{i}] - r_f = \beta_{i} E_{t+1}[r_m - r_f] $$
So if you want to predict the stock return of any asset (assuming for now that covariances are stable), you only need to predict the stock market return. Now that's where things get tricky.
The best two references to understand this are:
- Cochrane (2008) - The dog that did not bark
- Goyal and Welch (2007)
The first tells you what economists mean by equity premium being predictable. It basically implies that some variable (or state variable) predicts the equity premium. Cochrane argues that mathematically either dividend growth or returns must be predictable. He shows that the latter is true. Take a look at table (1):
[](https://i.stack.imgur.com/1XKLB.png)
The dividend-price ratio predicts the equity premium. When D/P is high the returns are high. But these are low-frequency in-sample estimates.
The second reference (Goyal and Welch) shows that the equity premium is predictable in-sample but not out-of-sample. So you cannot trade on this predictability - which basically implies that you cannot forecast the ex-post return (at a monthly frequency). Ex-ante we know that the equity premium moves with some state variables in the economy (i.e. expected returns are high in recessions), but in practice this cannot be exploited economically.
So can you predict the return of a stock long-term? Yes - if its beta is stable. Can you predict it next month? No, because you can't predict the market risk premium.
Let me expand a bit on predictability of the market risk-premium vs predictability of variances/covariances (or betas):
Consider total US stock market between 1928-2022:
- $T_{years} = 95$;
- Average excess return of stocks over $r_f$: $\bar{r}_{annual} = 0.082$
- With a standard deviation of return of: $\sigma_{annual} = 0.197$
What is the confidence interval for the mean (which can be time-varying) and the standard deviation?
- SE error for the mean: $2.02\%$
- Standard error for the volatility $\approx 1.43\%$ (assuming normality)
So confidence interval for the mean: $8.2\% \pm 1.96 \times 2.02\% = [4.23\% - 12.16\%]$
And confidence interval for the volatility: $[0.17 \text{ to } 0.22]$
So means are much harder to estimate than volatilities. And that is the issue: how to forecast the mean return of the market!
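For completeness, a quick sketch of the standard-error arithmetic above (Python; the volatility standard error assumes normally distributed returns):

```python
import numpy as np

T = 95            # years of data, 1928-2022
r_bar = 0.082     # average annual excess return
sigma = 0.197     # annual return volatility

se_mean = sigma / np.sqrt(T)        # ~0.0202
se_vol = sigma / np.sqrt(2 * T)     # ~0.0143, normality assumed

ci_mean = (r_bar - 1.96 * se_mean, r_bar + 1.96 * se_mean)   # ~[4.2%, 12.2%]
ci_vol = (sigma - 1.96 * se_vol, sigma + 1.96 * se_vol)      # ~[0.17, 0.22]
print(ci_mean, ci_vol)
```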
| null |
CC BY-SA 4.0
| null |
2023-04-17T23:25:37.507
|
2023-04-18T07:32:42.123
|
2023-04-18T07:32:42.123
|
19645
|
16472
| null |
75254
|
1
|
75255
| null |
1
|
83
|
Suppose there are 3 people A, B, C and a referee. A, B, C individually takes one number from [0,1] with the order A->B->C. B could see the choice of A, C could see the choice of A and B. After that, the referee randomly take a number $Y$ from U(0,1). People who chooses the number which is most closest to $Y$ wins. But people who did a later choice cannot take the same number as the previous one chooses.
So:
- what's the strategy of A, B, and C to be the final winner, if it exists? if not, please state the reason?
- if A takes 0, what is the strategy of B? and is there a strategy to guarantee B is the winner? and the strategy of C?
For question 2), I think as long as B takes a positive number and makes it close to 0, B would be closer to the final number than A. But not sure how he could defeat C...is there anybody who can help me? Thanks!
|
Gaming strategy for "closest number" game
|
CC BY-SA 4.0
| null |
2023-04-18T10:41:06.747
|
2023-04-18T11:28:37.150
|
2023-04-18T11:28:37.150
|
60314
|
60314
|
[
"probability",
"game-theory"
] |
75255
|
2
| null |
75254
|
3
| null |
First try to work out the case with 2 players, for that case figure out what Player B does given player A's choice. Given this you can determine the optimal choice for A, that is the choice that maximizes their chance of winning given they know B will choose optimally.
| null |
CC BY-SA 4.0
| null |
2023-04-18T11:06:27.967
|
2023-04-18T11:06:27.967
| null | null |
848
| null |
75258
|
1
| null | null |
2
|
73
|
In the search of yield holding HY bonds in EM markets seems to be a theme that gains traction. In order to calculate the P&L correctly I just wanted to see I have it correct!
Assumption is
- Borrow cash (at internal cost)
- Sell USD, buy Local ccy at X Forward Points (negative or positive depends on the curve shape)
- Now adjust for duration to find the cost of holding time in basis points
For example Chilean Peso (CLP) Bonds:
Buy CLP 1m @ 803 vs spot @ 800 = -3 pesos per month = 37,500 USD per 10mm USD
Buy a 10y bond with Dur = ~8k
37500/8000 = cost of carry per month
Is this assumption correct or I am missing something?
|
Cost of carry when holding a foreign ccy bond
|
CC BY-SA 4.0
| null |
2023-04-18T14:42:59.333
|
2023-04-23T22:21:21.713
|
2023-04-19T15:59:20.717
|
5656
|
63409
|
[
"fixed-income",
"cross-currency-basis"
] |
75260
|
1
|
75279
| null |
1
|
104
|
I've been trading crypto futures on Phemex as it is one of the only few exchanges where I can do that from a U.S. IP address. I have always kept an up-to-date local database with OHLC/kline data from Phemex to run my backtests off of, and subsequently build my algos from.
Recently, however, Phemex began delisting all of its [coin]/USD pairs and adding new [coin]/USDT replacements. Therefore, kline data on the USD pairs has stopped and the data on the USDT pairs only goes back a few weeks. I don't feel that merging the USD price data for the older dates with the new USDT data for the recent dates is going to be an accurate representation of the market.
So I was thinking about building a whole new database with price data from another exchange with enough history and plenty of volume, like say, Binance, to run my backtests with, while continuing to do the actual trading on Phemex. Unlike the arbitrage opportunities from a while back, price differences across exchanges are minimal nowadays.
Is anyone out there trading on an exchange other than the one they backtest on? Any thoughts at all on the pros/cons would be greatly appreciated!
|
Backtesting on one exchange, while trading on another?
|
CC BY-SA 4.0
| null |
2023-04-18T15:51:47.103
|
2023-04-20T02:35:14.540
| null | null |
67087
|
[
"market-data",
"data",
"historical-data",
"ohlc"
] |
75261
|
1
| null | null |
1
|
39
|
In the treynor-black model optimal instrument weights are proportional to:
$w_i = \frac{\frac{\alpha_i}{\sigma_i^2}}{\sum_j \frac{\alpha_j}{\sigma_j^2} }$.
Let Instrument 1 be a stock with $\alpha_1$ and $\sigma_1^2$ and let Instrument 2 be a call option with 50 delta on instrument 1. Then for Instrument 2 we have $\alpha_2 = \delta* \alpha_1$ and $\sigma_2^2 = \delta^2 \sigma_1^2$.
Simply plugging in these parameters will show that the model gives a higher weight to instrument 2. Is there any (intuitive) reason why the model would favour the call option over the stock? Or is this purely an "artefact" because the model was designed with stocks in mind?
|
Is there economic/intuitive reason why the treynor-black model favour low delta instruments?
|
CC BY-SA 4.0
| null |
2023-04-18T16:42:07.537
|
2023-04-19T12:26:35.993
| null | null |
17316
|
[
"portfolio-optimization",
"call",
"treynor-black"
] |
75262
|
1
| null | null |
0
|
48
|
Bit of a newbie question; but I see this pop up from time to time.
If we have a volatility surface (e.g. for the S&P 500) built from market options, what more can we do with it than price other European options at non-traded strikes and maturities?
More specifically, I see people claim to need the volatility surface to value exotic options and risk management. How do they do this?
Note, as I understand it, if we have a Heston Model (for instance) calibrated to options prices, we can value any exotic option we'd like and compute some gradients to our liking. But, we can't get there from only picking out an implied volatility of the European options.
As an example question: Given the vol surface - how do I price a barrier option on the SPX? How do I compute its sensitivities to risk-factors such as spot, vol, etc..
What am I missing here?
|
Pricing and Risk Management of Exotic Options with a Volatility Surface
|
CC BY-SA 4.0
| null |
2023-04-18T16:59:15.843
|
2023-04-18T17:08:24.683
|
2023-04-18T17:08:24.683
|
33872
|
33872
|
[
"volatility",
"implied-volatility",
"risk-management",
"exotics",
"volatility-surface"
] |
75264
|
2
| null |
75130
|
1
| null |
For anyone else wanting to achieve the same thing, here's a custom tool I've created:
```
import yfinance as yf
import numpy as np
from datetime import datetime


def read_input_file(file_name):
    # each line: "<TICKER> <STRIKE>"
    with open(file_name, 'r') as file:
        lines = file.readlines()
    return [tuple(line.strip().split()) for line in lines]


def calculate_best_expiration(ticker, strike_price, commission):
    ticker_data = yf.Ticker(ticker)
    expirations = ticker_data.options
    prices_per_day = []
    for expiration in expirations:
        # put chain for this expiry, filtered to the requested strike
        data = ticker_data.option_chain(date=expiration).puts
        data = data[data['lastTradeDate'].notnull()]
        data = data[data['strike'] == float(strike_price)]
        days_to_expiration = (datetime.fromisoformat(expiration) - datetime.today()).days
        if len(data) > 0 and days_to_expiration > 0:
            ask = data['ask'].iloc[0]
            bid = data['bid'].iloc[0]
            # mid price per contract (100 shares), net of commission
            price = (bid + ask) / 2 * 100 - commission
            prices_per_day.append(price / days_to_expiration)
        else:
            prices_per_day.append(np.nan)
    best_expiration_index = int(np.nanargmax(prices_per_day))
    return expirations[best_expiration_index], prices_per_day[best_expiration_index]


if __name__ == '__main__':
    input_data = read_input_file('input.txt')
    commission = 2.1  # flat fee per contract in USD
    for ticker, strike_price in input_data:
        try:
            best_expiration, ratio = calculate_best_expiration(ticker, strike_price, commission)
            print(f'{ticker} ({strike_price}): {best_expiration}, USD ${ratio:.3f}/day/contract')
        except Exception as e:
            print(f'Error while processing {ticker} ({strike_price}): {e}')
```
The expected input file format is:
```
AAPL 100
INTC 25
...
```
| null |
CC BY-SA 4.0
| null |
2023-04-18T18:19:04.377
|
2023-04-19T09:53:22.000
|
2023-04-19T09:53:22.000
|
66922
|
66922
| null |
75265
|
2
| null |
75236
|
2
| null |
The answer to your question really just boils down to the definitions of correlation and regression beta. You cannot have a positive beta and a negative correlation; it is mathematically impossible.
$\beta = \frac{cov(y,x)}{var(x)} = \frac{\rho \sigma_y \sigma_x}{\sigma_x^2}$.
i.e. $\beta = \rho \frac{\sigma_y}{\sigma_x} $.
Since volatilities are always positive the $\beta$ has the same sign as $\rho$.
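A quick numerical check of this identity (Python; the joint sample is arbitrary and only serves to illustrate the algebra):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(size=10_000)          # any joint sample works

beta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)  # regression slope of y on x
rho = np.corrcoef(y, x)[0, 1]
beta_from_rho = rho * np.std(y, ddof=1) / np.std(x, ddof=1)

print(beta, beta_from_rho)                     # equal up to floating point
```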
| null |
CC BY-SA 4.0
| null |
2023-04-18T18:40:00.707
|
2023-04-18T18:40:00.707
| null | null |
17316
| null |
75266
|
1
| null | null |
0
|
58
|
I have a question on normalisation of returns when working with high frequency bars, such as 10mins, 30mins or 1hr.
Suppose I have a time series of 1hr bars. I'd like to compute 5hr, daily, weekly, quarterly, volatility normalised returns. would it still be correct to do the normalisation by computing return_5h / (vol_1hr / sqrt(5))? The purpose is to create vol adjusted return features for different time period. return_5h would be computed from 1hr bar using an offset of 5.
This is my first time posting. Apologies if the terminology used is not tight.
|
Normalise 5hr, 10hr, weekly, monthly returns using 1hr time bar
|
CC BY-SA 4.0
| null |
2023-04-18T22:58:33.247
|
2023-04-19T05:20:16.853
| null | null |
67092
|
[
"volatility",
"normalization",
"high-frequency-data"
] |
75267
|
2
| null |
41218
|
-1
| null |
Check the code below for the basic logic. You may need to improve it; it is just quick and dirty work. In case it works for you, please re-share. Enjoy!
How does a PID controller work - a crash course?
- A PID controller works on the error in order to minimise the error. Error (e) = SetPoint (SP) - ProcessValue (PV). This is the most basic relation.
- There are three terms in the PID controller equation, which are really just multipliers:
- Kp - dictates how much the output changes with each unit change in the error (the proportional term).
- Kd - the tricky part starts here. Kd is also a multiplier, but it multiplies the rate of change of the error, i.e. how fast the error is changing with respect to time (the derivative term).
- Ki - also a multiplier, but it multiplies a term with memory of past errors, i.e. Ki multiplies the running sum of the error (the integral term).
So: Kp dictates how much the output changes with the difference between set point and actual value (the proportional part), Kd dictates how the rate of change of the error affects the output, and Ki dictates how accumulated error is worked off. If the set point and the process value become equal, the error is zero, the rate of change of the error is zero, and the integral term stops growing.
```
//@version=5
indicator("PID Controller", overlay=false)

// Input Variables
lookback = input.int(title="Lookback Period", defval=20, minval=1)
kp = input.float(title="Kp", defval=0.1, minval=0)
kd = input.float(title="Kd", defval=0.1, minval=0)
ki = input.float(title="Ki", defval=0.1, minval=0)
price_src = input(close, title="Price Source")

// Variables
var float error = 0.0
var float error_sum = 0.0
var float error_diff = 0.0
var float pid = 0.0

// Arrays
var float[] pid_array = array.new_float(0)

// Loop
for i = 0 to 10
    // Calculate error and PID
    error := price_src - ta.sma(price_src, lookback)
    error_sum := error_sum + error
    error_diff := error - nz(error[1])
    pid := kp*error + ki*error_sum + kd*error_diff
    // Add PID value to array
    array.push(pid_array, pid)

// Calculate average PID value
var pid_sum = 0.0
for i = 0 to array.size(pid_array)-1
    pid_sum := pid_sum + array.get(pid_array, i)
var float pid_avg = pid_sum / array.size(pid_array)

// Plotting
plot(pid_avg, color=color.green, linewidth=1, title="PID")
```
| null |
CC BY-SA 4.0
| null |
2023-04-19T02:02:33.820
|
2023-04-23T14:23:55.280
|
2023-04-23T14:23:55.280
|
66863
|
66863
| null |
75268
|
1
| null | null |
2
|
79
|
Would be grateful for any assistance.
Below are the expected value and variance of the integral of the short rate under the Vasicek model ([https://www.researchgate.net/publication/41448002](https://www.researchgate.net/publication/41448002)):
$E\left[ \int_{0}^{t}r(u)du|\mathcal{F_{0}}\right]\mathcal{}=\frac{(r_{0}-b)}{a}(1-e^{-at})+bt$
$Var\left[ \int_{0}^{t}r(u)du|\mathcal{F_{0}}\right]\mathcal{}=\frac{\sigma^{2}}{2a^{3}}(2at-3+4e^{-at}-e^{-2at})$
But what if I would like to find the following:
$E\left[ \int_{t_{1}}^{t_{2}}r(u)du|\mathcal{F_{0}}\right]\mathcal{} \\
Var\left[ \int_{t_{1}}^{t_{2}}r(u)du|\mathcal{F_{0}}\right]\mathcal{} \\
where \;\;0\lt t_{1}\lt t_{2}$
My question is, can I simply rewrite the above expressions as:
$E\left[ \int_{t_{1}}^{t_{2}}r(u)du|\mathcal{F_{0}}\right]\mathcal{}=E\left[ \int_{0}^{t_{2}}r(u)du - \int_{0}^{t_{1}}r(u)du|\mathcal{F_{0}}\right]\mathcal{} = \frac{(r_{0}-b)}{a}(e^{-at_{1}}-e^{-at_{2}})+b(t_{2}-t_{1})$
$Var\left[ \int_{t_{1}}^{t_{2}}r(u)du|\mathcal{F_{0}}\right]\mathcal{} = Var\left[ \int_{0}^{t_{2}}r(u)du-\int_{0}^{t_{1}}r(u)du|\mathcal{F_{0}}\right]\mathcal{}=\\=Var\left[ \int_{0}^{t_{2}}r(u)du|\mathcal{F_{0}}\right]\mathcal{}+Var\left[ \int_{0}^{t_{1}}r(u)du|\mathcal{F_{0}}\right]\mathcal{}-2Var\left[ \int_{0}^{t_{1}}r(u)du|\mathcal{F_{0}}\right]\mathcal{}=\\=Var\left[ \int_{0}^{t_{2}}r(u)du|\mathcal{F_{0}}\right]\mathcal{}-Var\left[ \int_{0}^{t_{1}}r(u)du|\mathcal{F_{0}}\right]\mathcal{}=\\=
\frac{\sigma^{2}}{2a^{3}}(2a(t_{2}-t_{1})+4(e^{-at_{2}}-e^{-at_{1}})-e^{-2at_{2}}+e^{-2at_{1}})\\\\$
|
Expected value and variance of the short rate under the Vasicek model
|
CC BY-SA 4.0
| null |
2023-04-19T03:34:06.487
|
2023-04-19T03:34:06.487
| null | null |
67095
|
[
"stochastic-processes",
"stochastic-calculus",
"vasicek",
"stochastic-integral"
] |
75269
|
2
| null |
75014
|
1
| null |
You’ve probably figured this out by now. But you’re confusing OTM calls with below the money calls. An OTM call is above the money, not below.
| null |
CC BY-SA 4.0
| null |
2023-04-19T04:01:44.893
|
2023-04-19T04:01:44.893
| null | null |
67097
| null |
75270
|
2
| null |
75266
|
-1
| null |
I am testing Google Bard's capability for quantitative methods and saw your question. Asked your question and here's the answer:
To calculate monthly volatility based on hourly return data, you can use the following steps:
- Calculate the hourly returns for the month.
- Calculate the daily returns by taking the average of the hourly returns for each day.
- Calculate the monthly returns by taking the average of the daily returns for each month.
- Calculate the standard deviation of the monthly returns.
The standard deviation is the monthly volatility.
For example, let's say you have hourly return data for the month of January. The following steps would show you how to calculate the monthly volatility:
Calculate the hourly returns for January.
| Hour | Return |
| ---- | ------ |
| 1 | 0.5% |
| 2 | -0.3% |
| 3 | 0.2% |
| ... | ... |
| 23 | 0.1% |
| 24 | -0.4% |
Calculate the daily returns by taking the average of the hourly returns for each day.
|Day | Return|
|--- | ------|
|1 | 0.1% |
|2 | -0.2% |
|3 | 0.1% |
|... | ... |
|31 | -0.2% |
Calculate the monthly returns by taking the average of the daily returns for each month.
| Month | Return |
| ------- | ------ |
| January | -0.1% |
Calculate the standard deviation of the monthly returns.
Standard deviation: 0.2%
The standard deviation is the monthly volatility.
In this example, the monthly volatility for January is 0.2%. This means that the price of the asset can be expected to fluctuate by up to 0.2% in either direction from the mean price in a given month.
It is important to note that volatility is a measure of historical price changes. It does not predict future price changes.
| null |
CC BY-SA 4.0
| null |
2023-04-19T05:14:20.210
|
2023-04-19T05:20:16.853
|
2023-04-19T05:20:16.853
|
63042
|
63042
| null |
75271
|
1
| null | null |
3
|
113
|
In the paper by Longstaff and Schwartz on American option pricing, the continuation value at time $t_k$ is given by:
\begin{align}
F(\omega;t_k) = \mathbb{E}_Q\Big[\sum_{j=k+1}^Kexp\Big(-\int_{t_k}^{t_j}r(\omega,s)ds\Big)C(\omega;t_j;t_k,T)\Big|\mathcal{F}_{k}\Big].
\end{align}
Why do we need the expected value in the above equation? Note that the formula is pathwise ($\omega$ is fixed). In other words, this is the future discounted cash flow for a fixed path that has already been simulated. What is the expectation averaging over?
|
Continuation value in Longstaff-Schwartz: Why the expected value?
|
CC BY-SA 4.0
| null |
2023-04-19T07:26:54.387
|
2023-04-21T10:54:56.537
|
2023-04-19T12:15:25.117
|
5656
|
26887
|
[
"option-pricing",
"monte-carlo",
"american-options",
"simulations",
"longstaff-schwartz"
] |
75273
|
2
| null |
75261
|
2
| null |
So after thinking about it a bit, I found the solution:
The formula in the Treynor-Black model only applies if the idiosyncratic risks are independent (a diagonal residual covariance matrix). A call option is, locally, simply a (de)leveraged version of the stock, so its idiosyncratic risk is perfectly correlated with the stock's.
From an optimization perspective the call option is therefore (locally) redundant, and we cannot simply plug its values into the formula.
So it was a case of using the correct formula in the wrong situation.
| null |
CC BY-SA 4.0
| null |
2023-04-19T12:26:35.993
|
2023-04-19T12:26:35.993
| null | null |
17316
| null |
75274
|
1
| null | null |
0
|
66
|
Suppose there are only two risky assets and we want to optimize our portfolio. Constraints are that we have a minimum return $\overline{r}$ and we can only invest $w_1 + w_2 = 1$.
Is it possible that in this setting the constraint $w_1 \times r_1 + (1-w_1) \times r_2 = \overline{r}$ always solves the problem or am I doing something wrong here?
I tried to set it up with Lagrangian: The constraint with $\lambda$ always provides me directly with the solution.
But how is that? I mean it seems strange that the solution is completely independent of the variance and covariance.
|
Markowitz Optimization with 2 assets
|
CC BY-SA 4.0
| null |
2023-04-19T15:01:11.807
|
2023-04-19T16:42:59.983
|
2023-04-19T16:01:49.593
|
5656
|
67105
|
[
"mean-variance",
"markowitz"
] |
75275
|
2
| null |
75274
|
2
| null |
With $r_1$ and $r_2$ constants and your constraint the return equation reduces to an affine function (a line in the plane) which indeed has only one solution for a given level.
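Explicitly, substituting $w_2 = 1 - w_1$ into the return constraint (and assuming $r_1 \neq r_2$):
$$w_1 r_1 + (1-w_1) r_2 = \overline{r} \quad\Longrightarrow\quad w_1 = \frac{\overline{r}-r_2}{r_1-r_2},$$
so the two constraints alone pin down the weights, and the variances and covariance never enter.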
| null |
CC BY-SA 4.0
| null |
2023-04-19T16:42:59.983
|
2023-04-19T16:42:59.983
| null | null |
848
| null |
75276
|
1
|
75281
| null |
2
|
94
|
The Nelson-Siegel model has the following form
$y(\tau)={}_{1}X+{}_{2}X\frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}+{}_{3}X\left ( \frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}-e^{-\lambda{\tau}} \right)$
We denote the factor loadings for the parameters as $1$, $\frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}$ and $\left ( \frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}-e^{-\lambda{\tau}} \right)$. Do these factor loadings mean the same thing as in factor analysis?
Or how should factor loadings be defined in the Nelson-Siegel model?
|
Factor loadings in Nelson-Siegel model
|
CC BY-SA 4.0
| null |
2023-04-19T16:44:26.060
|
2023-05-04T07:47:13.637
|
2023-05-04T07:47:13.637
|
848
|
44881
|
[
"fixed-income",
"yield-curve",
"factor-loading"
] |
75277
|
1
|
75283
| null |
1
|
66
|
Taken from the book:
$\Delta{S}$ - Change in spot price, S, during a period of hedge.
$\Delta{F}$ - Change in futures price, F, during a period of hedge.
If we assume that the relationship between $\Delta{S}$ and $\Delta{F}$ is approximately linear, we can write:
$\Delta{S} = a + b\Delta{F} + e$
where a and b are constants and e is an error term. Suppose that the hedge ratio is h (futures size position/exposure).
EVERYTHING IS CLEAR TILL NOW
Then the change in the value of the position per unit of exposure to S is
$\Delta{S} - h\Delta{F} = a + (b-h)\Delta{F} + e$
- If I understand correctly, $\Delta{S}$ - $h\Delta{F}$ is change of spot price - change of futures price related to my position. Let's assume that hedge ratio is 1. Then $\Delta{S}$ - $h\Delta{F}$ is just a difference between spot price change and futures price change, why do I need it?
- Why in $a + b\Delta{F} + e$ b was replaced by (b - h) when I subtracted $h\Delta{F}$ from $\Delta{S}$ ?
- What is the main idea of my calculations?
|
Calculating the Minimum Variance Hedge Ratio
|
CC BY-SA 4.0
| null |
2023-04-19T20:43:31.370
|
2023-04-20T12:11:57.107
|
2023-04-19T20:44:10.110
|
67108
|
67108
|
[
"hedge",
"minimum-variance",
"optimal-hedge-ratio"
] |
75278
|
1
| null | null |
1
|
52
|
[Options Pricing and Mean Reversion](https://quant.stackexchange.com/questions/45976/options-pricing-and-mean-reversion)
In the question above, in the accepted answer, the writer claims:
"For instance with a 100% mean reversion a 20% historical annual standard deviation would translate into approximately 30% instantaneous volatility."
I understand that the instantaneous volatility should be higher since annual changes will be less due to mean-reversion. But how did they get 20% annual std dev is approximately 30% instantaneous std dev?
Is there a formula I am missing?
Thanks in advance.
|
Converting Annual Vol to Instantaneous Vol with Mean Reversion
|
CC BY-SA 4.0
| null |
2023-04-19T20:54:03.207
|
2023-04-19T20:54:03.207
| null | null |
66638
|
[
"volatility",
"mean-reversion"
] |
75279
|
2
| null |
75260
|
3
| null |
This is a common issue in crypto and there are a few factors that you should consider. Although there could be a slight difference in price across the exchanges any price gap will be filled by arbitragers. In doing so, there will be a slight delay, a cost (exchanges have different trading fees) and overall liquidity of the exchange. Also, it depends and what kind of backtests you are running. I will explain each in more detail.
Time: let's say your strategy is based on 1-second bars. There will be a huge difference across exchanges, and this will impact your results. For 1-minute bars and liquid tokens the difference improves, but it could still have some impact - especially since exchanges like Binance usually lead the moves. However, I wouldn't be worried if the strategy uses 1-hour or 1-day bars, because the bar size will be large and the percentage difference will be small.
Cost: Basically, when the exchange has a high trading cost, slight gaps are ignored, and the delays are longer. Generally speaking, let’s say transaction cost is 10bps and filling the gap will earn you less than that, you probably won’t try to achieve that.
Liquidity: There should be enough liquidity, otherwise the difference will be large. I have seen tokens that don't trade for hours on some exchanges while there are a few trades on others. Also, there can be an overreaction which can impact the bar size.
Now, the important factor is how you are using the bar data. If you are using it for technical indicators usually there’s some average going on, so those slight differences do not matter.
How frequently do you trade and what is your average win per trade in bps? – If the average win is over 100bps then the bar difference will have few bps impact so the difference shouldn’t matter. But if it is 5-10 bps then I would be worried.
How is order filling implemented in your backtest engine? That is, if your order's limit price is the current bar close, do you assume it is filled, or do you assume you get filled at the next bar open? This will have an impact on the number of trades.
| null |
CC BY-SA 4.0
| null |
2023-04-20T02:35:14.540
|
2023-04-20T02:35:14.540
| null | null |
63042
| null |
75281
|
2
| null |
75276
|
2
| null |
Yes, in general these factor loadings have the same interpretation as in factor analysis, though there are slight differences. In both cases, factor loadings indicate the contribution of the underlying factors to the observed data. In this respect, they have the same meaning.
The Nelson-Siegel model is used to estimate the term structure of interest rates (yield curve) based on observed bond yields using three underlying factors: the level, slope, and curvature. The model uses these three factors and their loadings to explain the variation in observed yields across different maturities.
In your equation for the Nelson-Siegel model:
$y(\tau)={}_{1}X+{}_{2}X\frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}+{}_{3}X\left ( \frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}-e^{-\lambda{\tau}} \right)$
$y(\tau)$ represents the yield for a given maturity $\tau$, and ${}_{1}X$, ${}_{2}X$, ${}_{3}X$ are the factors representing the level, slope, and curvature of the yield curve, respectively. The three factor loadings for the Nelson-Siegel model are:
- $1$: The level factor has a loading of 1, which means that it influences the yield curve's overall level or long-term rates.
- $\frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}$: The slope factor loading indicates how the slope of the yield curve changes as maturity increases. It's mainly associated with the difference between short-term and long-term rates.
- $\left ( \frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}-e^{-\lambda{\tau}} \right)$: The curvature factor loading represents the non-linear aspects of the yield curve, capturing the hump or the deviation from a straight line.
In general, factor analysis is just a statistical method used to identify the relationship between a target variable and a set of variables called factors. In this sense the factor analysis in the Nelson-Siegel model is the same as the factor analysis used to establish the well known Fama-French factors (which usually are referred to when referring to factor investing), the target variable and the factors themselves are different though, while the concept is the same.
When comparing to other factor analysis approaches, the notation might possibly be leading to confusion. ${}_{1}X$, ${}_{2}X$, ${}_{3}X$ are the parameters which would be similar to $\beta_1$, $\beta_2$, $\beta_3$ in the usual notation for a (Fama-French) factor model and $\lambda_\tau > 0$ is an additional parameter that imposes structure on the explaining variables $1$, $\frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}$ and $\left ( \frac{1-e^{-\lambda{\tau}}}{{\lambda{\tau}}}-e^{-\lambda{\tau}} \right)$ which serve a similar role as $X_1$, $X_2$ and $X_3$ in the (Fama-French) factor model. As opposed to a (Fama-French) factor model, the explaining variables are not observed data, but are constructed from $\lambda_\tau$ which also needs to be estimated.
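As an illustration, a minimal sketch of the three loadings as a function of maturity (Python; the maturities and the choice $\lambda = 0.6$ are purely illustrative):

```python
import numpy as np

def ns_loadings(tau, lam):
    # level, slope and curvature loadings for maturities tau (in years) and decay lam
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    level = np.ones_like(slope)
    return level, slope, curvature

taus = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])   # illustrative maturities
level, slope, curv = ns_loadings(taus, lam=0.6)       # lam chosen arbitrarily here
print(np.column_stack([taus, level, slope, curv]))    # slope decays, curvature is humped
```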
| null |
CC BY-SA 4.0
| null |
2023-04-20T09:20:43.043
|
2023-04-20T09:52:22.513
|
2023-04-20T09:52:22.513
|
5656
|
5656
| null |
75282
|
2
| null |
63241
|
3
| null |
I am not aware of any such service, but simulating exchange behavior for a backtest is very challenging given irregular order arrivals, their market impact, and hidden orders. Even the paper-trading services' order execution is very simple and has limitations (if you send a limit order it gets filled as soon as the price touches the limit, and size is irrelevant). Even if you find some Execution Management Service to install locally, you would still need data and a time-series database to use it. Below are some considerations if you want to build your own tool.
A good order execution system should take into account market order, limit order, slippage, latency and calculating average fill price and size for both market and limit orders.
If a simplified option satisfies your needs, you can implement it yourself. You would need historical order book data up to N levels (snapshots or updates are both fine). When you send a market order of a certain size, you walk down the order book, reducing the remaining size until it reaches zero. By doing so you can calculate the average fill price (weighted by the size taken at each price) and the slippage. When you send a limit order, you check whether a better fill is available. If yes, you proceed as for a market order, but the tricky part is when you reach the limit price and the remaining size is still greater than zero (you can then assume a full fill or a partial fill). Depending on the granularity of your order book data you can implement some logic to take latency into account.
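A minimal sketch of that market-order fill logic (Python; the order-book levels in the example are made up):

```python
def fill_market_order(levels, order_size):
    """
    Walk the book and compute the average fill price for a market buy.
    levels: list of (price, size) tuples for the ask side, best price first.
    Returns (avg_fill_price, filled_size); filled_size < order_size if the
    book is exhausted.
    """
    remaining = order_size
    cost = 0.0
    for price, size in levels:
        take = min(remaining, size)
        cost += take * price
        remaining -= take
        if remaining <= 0:
            break
    filled = order_size - remaining
    return (cost / filled if filled > 0 else float("nan"), filled)

# example: best ask 100.0 with 5 units, then 100.5 with 10 units
avg_price, filled = fill_market_order([(100.0, 5), (100.5, 10)], 8)
# avg_price = (5*100.0 + 3*100.5) / 8 = 100.1875, slippage vs best ask = 0.1875
```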
[EDIT]
For advanced use cases you can refer to following open source repo:
[https://github.com/exchange-core/exchange-core](https://github.com/exchange-core/exchange-core)
| null |
CC BY-SA 4.0
| null |
2023-04-20T09:28:21.347
|
2023-05-09T08:25:38.287
|
2023-05-09T08:25:38.287
|
63042
|
63042
| null |
75283
|
2
| null |
75277
|
2
| null |
Hedging is when you are long one thing and short another thing, with the hope that the overall portfolio will be stable, it will not change much in value. Here the hedge position is: long 1 unit of S and short h units of F. Therefore the profit/loss or change in value of the position is $\Delta S−h \Delta F$. And $h$ is called the hedge ratio.
Now substitute the expression for $\Delta S$ into this; we get that the p/l is $(a+b \Delta F+e)-h \Delta F=a+(b-h) \Delta F+e$.
The main purpose of this calculation is to find out what $h$ should be, and the conclusion will be that $h$ should be set equal to $b$, the linear regression coefficient. That will zero out the middle term, the other 2 terms $a$ and $e$ we cannot do anything about.
You say "let us assume the hedge ratio is 1"; that is OK, but it is a strange assumption, since we are trying to calculate a value for $h$. The best we can do to minimize the variance is to set $h = b$.
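To make the variance-minimisation explicit (assuming, as in the regression setup, that $e$ is uncorrelated with $\Delta F$):
$$\operatorname{Var}(\Delta S - h\,\Delta F) = (b-h)^2\operatorname{Var}(\Delta F) + \operatorname{Var}(e),$$
which is smallest exactly at $h = b$, the regression slope.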
| null |
CC BY-SA 4.0
| null |
2023-04-20T12:04:21.770
|
2023-04-20T12:11:57.107
|
2023-04-20T12:11:57.107
|
16148
|
16148
| null |
75284
|
2
| null |
33436
|
0
| null |
Perhaps you can give a few cases where your code does not reproduce the SABR formula.
Fix beta=1 and start with a=0. This should reduce to the Black-Scholes model. The code should match the BS formula. Then increase the vol-of-vol a, first with zero correlation and then changing it to some non-positive value, e.g. -0.7.
| null |
CC BY-SA 4.0
| null |
2023-04-20T12:09:13.140
|
2023-04-20T12:09:13.140
| null | null |
67115
| null |
75285
|
1
| null | null |
1
|
41
|
(I know there are existing questions on this topic, but none seem to be for commercial re-distribution use, so please keep this question active.)
It seems there are many websites offering API for stock market data.
I want to create a company so that I can sell my data analysis to other people.
Which website has reliable and cheap data for commercial re-distribution in your experience?
I am looking at US and other important country(Germany, France, UK, Japan, India, Australia) data's for:
- Stocks price history and daily data going forward
- Company financials like income statement
- Commodity and currency data (historical and day wise going forward)
Please suggest good websites in the range of $50-$100 per month that will allow commercial re-distribution of data.
|
API for stock price data for commercial re-distribution?
|
CC BY-SA 4.0
| null |
2023-04-20T13:29:42.807
|
2023-04-20T16:07:16.140
| null | null |
67116
|
[
"programming",
"market-data",
"historical-data",
"currency",
"commodities"
] |
75286
|
2
| null |
74415
|
2
| null |
In the normal model framework $\Theta=-\Gamma\sigma^2/2$ is indeed a good theoretical rule-of-thumb. However, you're ignoring the impact of the roll-down of your underlying 1y1y fwd swap in this theta calculation, while your model probably isn't.
| null |
CC BY-SA 4.0
| null |
2023-04-20T13:39:36.423
|
2023-04-20T13:39:36.423
| null | null |
35980
| null |
75287
|
1
| null | null |
1
|
60
|
I am new to the quantitative finance side of things (I came from mathematical physics). I'm currently investigating numerical techniques for solving the Black-Scholes equation, which made me wonder when numerical techniques are actually required in the first place (i.e. when there is no closed-form analytic solution). Any help on this matter would be much appreciated.
Regards,
Eddie
|
When does a closed form analytic solution exist/not exist for the value of an option given by the BS eqn?
|
CC BY-SA 4.0
| null |
2023-04-20T14:58:34.787
|
2023-04-20T21:35:27.757
| null | null |
67120
|
[
"option-pricing"
] |
75288
|
2
| null |
75285
|
1
| null |
Reputable vendors that have these dimensions and would allow redistribution for the price you suggest do not exist.
| null |
CC BY-SA 4.0
| null |
2023-04-20T16:07:16.140
|
2023-04-20T16:07:16.140
| null | null |
848
| null |
75290
|
1
| null | null |
1
|
27
|
I am looking for tax-analysis for several ETFs and Mutual Funds. Morningstar used to have a tool that would show estimates of pretax returns and tax-adjusted returns for many funds (depending on their turnover), see screenshot below:
[](https://i.stack.imgur.com/BFOiK.png)
This tool seems to have been either discontinued or it was moved to somewhere, where I cannot find it. Does anyone know of any (ideally free) tool that would do such ballpark analysis?
|
ETF and Mutual Funds turnover tax analysis
|
CC BY-SA 4.0
| null |
2023-04-20T17:14:39.710
|
2023-04-20T17:14:39.710
| null | null |
16472
|
[
"turnover",
"taxes"
] |
75291
|
2
| null |
75287
|
3
| null |
As previously stated, analytical solutions in option pricing are the exception. So indeed, as Jan Stuller points out, numerical techniques are the tool to solve many problems.
To answer your question, within the context of the Black Scholes Merton model, there is one use of numerical techniques I can come up with: calculating the implied volatility when all the other parameters and price are known. Of course, this has been studied, [see this question](https://quant.stackexchange.com/q/7761/848).
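As an illustration, a minimal root-finding sketch for the implied volatility (Python; the inputs are arbitrary and the bracketing interval is an assumption):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    # standard Black-Scholes European call price
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    # root-find the vol that reproduces the observed price
    return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

sigma_true = 0.2
price = bs_call(100, 105, 0.5, 0.01, sigma_true)
print(implied_vol(price, 100, 105, 0.5, 0.01))   # recovers ~0.2
```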
| null |
CC BY-SA 4.0
| null |
2023-04-20T21:09:14.293
|
2023-04-20T21:35:27.757
|
2023-04-20T21:35:27.757
|
5656
|
848
| null |
75293
|
1
| null | null |
0
|
41
|
I need to analyze the Consolidated Schedule of Investment Tables found in the 10-Q of several companies. For example, from the following 10-Q report: [https://www.sec.gov/Archives/edgar/data/1655888/000095017021000801/orcci_10q_2021-06-30.htm#soi](https://www.sec.gov/Archives/edgar/data/1655888/000095017021000801/orcci_10q_2021-06-30.htm#soi)
I would need the tables from every page that has "Consolidated Schedule of Investments" at the top (in this case pg. 4-36).
I'm having trouble getting these tables into Excel or R for analysis. I've been researching and trying things but I'm honestly a bit over my head here being fairly new to programming.
Is there a place where I can download these tables or a way to access them through R or Excel?
|
How do I Download Consolidated Schedule of Investment Tables from a Company's 10-Q?
|
CC BY-SA 4.0
| null |
2023-04-21T04:21:08.857
|
2023-04-23T22:17:27.587
| null | null |
67128
|
[
"data"
] |
75294
|
1
| null | null |
5
|
453
|
I will be going through interview processes in the next few months.
I would like to have a book/reference to practice the manipulation of PDE, and stochastic calculus questions.
For example, I get a bit confused when I am deriving the Fokker-Planck equation, as PDE manipulation is necessary.
I am looking for a book/reference containing exercises and solutions, so I can practice.
Thank you very much for your help.
|
Book/reference to practice stochastic calculus and PDE for interviews
|
CC BY-SA 4.0
| null |
2023-04-21T15:48:07.390
|
2023-04-23T22:15:21.480
| null | null |
25437
|
[
"stochastic-calculus",
"differential-equations"
] |
75295
|
2
| null |
75294
|
8
| null |
You may like:
[Probability and Stochastic Calculus Quant Interview Questions by Ivan Matić, Radoš Radoičić, Dan Stefanica](https://rads.stackoverflow.com/amzn/click/com/1734531223)
[150 Most Frequently Asked Questions on Quant Interviews, Second Edition by Dan Stefanica, Radoš Radoičić, Tai-Ho Wang](https://rads.stackoverflow.com/amzn/click/com/097975769X)
[Heard on the Street: Quantitative Questions from Wall Street Job Interviews by Timothy Falcon Crack](https://rads.stackoverflow.com/amzn/click/com/0994103867)
[A Practical Guide To Quantitative Finance Interviews by Xinfeng Zhou](https://rads.stackoverflow.com/amzn/click/com/B0BL7QGYV2)
[Quant Job Interview Questions And Answers by Mark Joshi, Nick Denson, Andrew Downes](https://rads.stackoverflow.com/amzn/click/com/0987122827)
[Cracking the Finance Quant Interview: 51 Interview Questions and Solutions by Jean Peyre](https://rads.stackoverflow.com/amzn/click/com/B08D4F8RJP)
| null |
CC BY-SA 4.0
| null |
2023-04-21T16:28:46.403
|
2023-04-21T16:34:02.610
|
2023-04-21T16:34:02.610
|
36636
|
36636
| null |