Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
75561
|
5
| null | null |
0
| null |
Variance swaps are over-the-counter financial derivatives that allow investors to hedge their exposure to the magnitude of possible price movements of underliers, such as exchange rates, interest rates, or stock indexes. They offer direct exposure to the volatility of an underlying asset without the path-dependency issues associated with delta-hedged options. Variance swaps give investors a payout equal to the difference between realized variance (the square of the standard deviation) and a pre-agreed strike level.
[Corporate Finance Institute-Variance Swap](https://corporatefinanceinstitute.com/resources/derivatives/variance-swap/)
[Risk.net-Variance Swap](https://www.risk.net/definition/variance-swap)
[Quantlabs.net](https://quantlabs.net/academy/download/free_quant_instituitional_books_/%5BJP%20Morgan%5D%20Variance%20Swaps.pdf)
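As a toy illustration of the payout described above (realized variance versus a pre-agreed strike), here is a minimal sketch with made-up numbers; annualization and notional conventions are deliberately simplified:
```
import numpy as np

rng = np.random.default_rng(0)
daily_log_returns = rng.normal(0.0, 0.012, 252)           # hypothetical daily log-returns
realized_var = 252.0 * np.mean(daily_log_returns**2)      # annualized realized variance
strike_var = 0.20**2                                       # variance strike (20% vol, squared)
variance_notional = 100_000                                # illustrative $ per variance unit

payoff = variance_notional * (realized_var - strike_var)   # long-variance payout at expiry
print(realized_var, strike_var, payoff)
```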
| null |
CC BY-SA 4.0
| null |
2023-05-17T12:25:45.973
|
2023-05-17T12:25:45.973
|
2023-05-17T12:25:45.973
|
26556
|
26556
| null |
75562
|
4
| null | null |
0
| null |
Variance swaps are over-the-counter financial derivatives that allow investors to hedge their exposure to the magnitude of possible price movements of underliers, such as exchange rates, interest rates, or stock indexes. [Wikipedia: Variance Swap](https://en.wikipedia.org/wiki/Variance_swap)
| null |
CC BY-SA 4.0
| null |
2023-05-17T12:25:45.973
|
2023-05-17T12:25:45.973
|
2023-05-17T12:25:45.973
|
26556
|
26556
| null |
75563
|
2
| null |
75559
|
3
| null |
Well, there can definitely be a risk-neutral measure, but only one of the processes is a tradable asset. For instance, consider the total return process of a stock $S_t$
$$
dS_t = \mu S_t dt + \sigma S_t dW_t
$$
and its price return
$$
dP_t = (\mu - q)P_t dt + \sigma P_t dW_t, \quad P_0 := S_0
$$
Under the risk-neutral measure you have exactly the same SDEs with $r$ replacing $\mu$ (here $q$ is the dividend yield), but only one of them is tradable. In this example the total return process is tradable, the price return is not.
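A quick Monte Carlo sanity check of this point (illustrative parameters only): under the risk-neutral measure the discounted total return process averages back to $S_0$, while the discounted price return process does not when $q>0$, which is one way to see that only the former behaves like a traded asset.
```
import numpy as np

rng = np.random.default_rng(42)
S0, r, q, sigma, T, n = 100.0, 0.03, 0.02, 0.2, 1.0, 200_000

Z = rng.standard_normal(n)
# terminal values of the two processes under the risk-neutral measure
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)      # total return
P_T = S0 * np.exp((r - q - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)  # price return

print(np.exp(-r * T) * S_T.mean())   # ~ S0: discounted total return is a martingale
print(np.exp(-r * T) * P_T.mean())   # ~ S0 * exp(-q*T): the price return alone is not
```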
| null |
CC BY-SA 4.0
| null |
2023-05-17T12:37:22.220
|
2023-05-17T12:37:22.220
| null | null |
65759
| null |
75564
|
1
| null | null |
3
|
29
|
It seems incredibly difficult not only to come up with a list of options for the active risk of private assets (private equity, private credit, infrastructure, real estate, etc.) but also to get senior people to agree on what the active risk limit should be. The crux of the issue appears to be modelling the assets in your portfolio and choosing sensible and representative benchmarks. Are there any resources on measuring and setting active risk limits for a fund that invests in both public and private assets?
|
Active risk management of private assets
|
CC BY-SA 4.0
| null |
2023-05-17T15:19:47.633
|
2023-05-17T15:19:47.633
| null | null |
19096
|
[
"portfolio-management",
"risk-management",
"private-equity",
"active-investing"
] |
75566
|
1
| null | null |
0
|
50
|
I'm thinking about the relationship between the volatility of a bunch of single stocks and the US VIX - as the companies in question tend to export heavily to the US.
To get a grasp over the beta involved, I was thinking about taking 5-day changes in the single stock implied vols (interpolated to a constant maturity like the VIX), 5-day changes in the VIX, and regressing the former on the latter.
But should I be taking log-changes instead?
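A minimal sketch of the regression described above, run both in raw 5-day changes and in 5-day log-changes (purely synthetic data, and no view taken on which specification is correct):
```
import numpy as np

rng = np.random.default_rng(1)
n = 250
vix = 15.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, n)))   # synthetic VIX level
stock_iv = 1.3 * vix + rng.normal(0.0, 0.5, n)             # synthetic single-stock implied vol

d_vix_raw = vix[5:] - vix[:-5]                              # 5-day changes in level
d_iv_raw = stock_iv[5:] - stock_iv[:-5]
d_vix_log = np.log(vix[5:] / vix[:-5])                      # 5-day log-changes
d_iv_log = np.log(stock_iv[5:] / stock_iv[:-5])

beta_raw = np.polyfit(d_vix_raw, d_iv_raw, 1)[0]            # slope of IV changes on VIX changes
beta_log = np.polyfit(d_vix_log, d_iv_log, 1)[0]
print(beta_raw, beta_log)
```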
|
What is the correct way to think about volatility - in raw terms or logs?
|
CC BY-SA 4.0
| null |
2023-05-17T17:53:24.470
|
2023-05-17T17:53:24.470
| null | null |
42552
|
[
"volatility"
] |
75568
|
1
|
75578
| null |
3
|
59
|
I am trying to solve this problem where we're asked to compute the quadratic variation of a process.
I assume that it is necessary to apply Ito's formula but not sure how to get the right solution.
Furthermore, I'm also not sure about how to apply Ito's formula to a function that includes integrals like this one. I am familiar with the basic Ito Formula and know how to apply it to simpler functions.
Let N be a (P,F)-Poisson process with parameter $\lambda$ > 0 and define the process:
$X = (X_t)_{t \geq 0}$ where
$$ X_t = 2 + \int_{0}^{t} \sqrt{s} dW_s + N_t $$
Compute $[X]_t$
Is this attempt correct?
$$ X_t = 2 + \int_{0}^{t} \sqrt{s} dW_s + N_t $$
which we can write in differential form as:
$$ dX_t = \sqrt{t}dW_t + dN_t $$
then the quadratic variation is given as:
$$ d[X]_t = dX_t \cdot dX_t = (\sqrt{t}\,dW_t + dN_t)(\sqrt{t}\,dW_t + dN_t) = t\,dt + dN_t $$
I assumed that the cross terms cancel out. We can then rewrite it in integral form to get $[X]_t$:
$$ [X]_t = \int_{0}^{t} s\, ds + \int_{0}^{t} dN_s = \frac{t^2}{2} + N_t $$
If this attempt is correct, I am not quite sure why $dN_t \cdot dN_t = dN_t$ would be true in the quadratic variation step.
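A quick numerical sanity check of this attempt is possible by simulating the process on a fine grid and comparing the realized quadratic variation with $t^2/2 + N_t$; the sketch below (with made-up parameters) is only an illustration, not a proof:
```
import numpy as np

rng = np.random.default_rng(7)
T, n, lam = 2.0, 200_000, 3.0
t = np.linspace(0, T, n + 1)
dt = T / n

dW = rng.normal(0, np.sqrt(dt), n)       # Brownian increments
dN = rng.poisson(lam * dt, n)            # Poisson increments
dX = np.sqrt(t[:-1]) * dW + dN           # dX_t = sqrt(t) dW_t + dN_t

qv = np.sum(dX**2)                        # realized quadratic variation on [0, T]
N_T = np.sum(dN)
print(qv, T**2 / 2 + N_T)                 # the two should be close
```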
|
Quadratic Variation Of Mixed Brownian Motion and Poisson Process
|
CC BY-SA 4.0
| null |
2023-05-17T23:16:10.930
|
2023-05-18T14:12:20.367
|
2023-05-17T23:18:13.050
|
67457
|
67457
|
[
"brownian-motion",
"poisson-process"
] |
75571
|
2
| null |
69034
|
0
| null |
I noticed this with my data (options on indexes and commodities futures). When an option is deep ITM and close to expiration, there is not enough extrinsic value to compensate for the lower intrinsic value, and the option may have a price below (price - strike). As this is not actually true (dealers can buy below that price, but they will always sell above it), I believe it's safe to assume the option price must be greater than or equal to (price - strike) + interest (or (strike - price) + interest, for DITM puts).
My solution was to increase the price of the option up to that level, and then I didn't have this problem anymore.
| null |
CC BY-SA 4.0
| null |
2023-05-18T02:09:32.480
|
2023-05-18T02:09:32.480
| null | null |
67458
| null |
75572
|
1
| null | null |
0
|
61
|
This is the simplest backtest I've come up with, yet I can't figure out how TradingView has calculated the Sharpe ratio to be 0.577. I've set the risk_free_rate=0. Is it possible to extract the formula that TradingView is using from this simple example, or is more data needed?
[](https://i.stack.imgur.com/BxLzK.png)
|
Figuring out how TradingView calculates the Sharpe ratio
|
CC BY-SA 4.0
| null |
2023-05-18T03:08:19.883
|
2023-05-18T09:12:52.553
|
2023-05-18T03:14:25.437
|
59453
|
59453
|
[
"returns",
"backtesting",
"sharpe-ratio",
"standard-deviation"
] |
75574
|
1
| null | null |
3
|
106
|
My question is related to [this thread](https://quant.stackexchange.com/questions/1710/proof-that-you-cannot-beat-a-random-walk), but I'm interested in a special case. Suppose that the price of an asset starts at 100 USD, and changes according to a geometric random walk; one step of 1% either up or down at each second. You start with an equity of 100 USD, and you put 100 USD at each trade.
The strategy is a very simple one-sided trade. Two orders are set repeatedly: a buy order at $x_{buy}=(100-1)^{n_1}$ and then a sell order at $x_{sell} = (100+1)^{n_2}$, where $n_1$ and $n_2$ are positive integers.
We know that the price will hit $x_{buy}$ and $x_{sell}$ an infinite number of times, so this strategy is bound to be profitable (however small the profit) in the long term. The problem is, it's an extremely slow strategy.
Suppose that you want to maximize the expected profit over a 1 year period. What values for $n_1$ and $n_2$ are optimal?
|
Maximum profit from trading on a random walk with a specific strategy
|
CC BY-SA 4.0
| null |
2023-05-18T04:28:18.120
|
2023-05-19T19:06:20.360
|
2023-05-19T19:06:20.360
|
59453
|
59453
|
[
"trading",
"random-walk",
"strategy"
] |
75575
|
2
| null |
75572
|
2
| null |
According to the documentation, it requires at least 3 periods.
Given the output shows 1% profit you get the following in Python,
```
import numpy as np

# define return vector
ret = [0.01,0,0]
# compute SR
round(np.mean(ret)/np.std(ret,ddof=1),3)
```
which yields 0.577
I voted to close this question because it is off topic here. It's probably better suited for the Money Stack Exchange, as it is not a quantitative finance question.
| null |
CC BY-SA 4.0
| null |
2023-05-18T09:12:52.553
|
2023-05-18T09:12:52.553
| null | null |
54838
| null |
75576
|
1
| null | null |
1
|
63
|
Let (B, S) a multi period binomial model that is arbitrage free.
I would like to prove that the unique optimal exercise time for an American call option is the maturity time T. My idea is to prove this result by contradiction. I want to find an exercise time t<T such that there exists an arbitrage opportunity, so that the value of the portfolio at time t is greater than zero, starting with an initial value equal to zero. Is this a good idea, and how can I find such a strategy explicitly?
|
Optimal exercise time in Binomial model
|
CC BY-SA 4.0
| null |
2023-05-18T12:35:23.010
|
2023-05-18T12:35:23.010
| null | null | null |
[
"american-options",
"binomial-tree",
"binomial"
] |
75577
|
1
| null | null |
0
|
22
|
I would appreciate any help or guidance, as I do not understand how to find the corresponding portfolio to use as a benchmark for the excess return that is used in the baseline regression model of Faulkender & Wang (2006). My doubt is with the dependent variable:
$$
r_{i,t}-R^{B}_{i,t} = \text{independent variables} + \epsilon_{i,t}
$$
where $r_{i,t}$ is the return of stock $i$ during year $t$ and $R^{B}_{i,t}$ is the benchmark portfolio return in year $t$.
Where they explain:
>
"To arrive at our estimate of the excess return, we use the 25 Fama and French portfolios formed on size and book-to-market as our benchmark portfolios. A portfolio return is a value-weighted return based on market capitalization within each of the 25 portfolios. For each year, we group every firm into one of 25 size and BE/ME portfolios based on the intersection between the size and book-to-market independent sorts. Fama and French (1993) conclude that size and the book-to-market of equity proxy for sensitivity to common risk factors in stock returns, which implies that stocks in different size and book-to-market portfolios may have different expected returns. Therefore, stock i’s benchmark return at year t is the return of the portfolio to which stock i belongs at the beginning of fiscal year t. To form a size- and BE/ME-excess return for any stock, we simply subtract the return of the portfolio to which it belongs from the realized return of the stock." (p.1967)
- I have downloaded the different files needed from Fama and French, but I do not understand how to determine which of the 25 portfolios I should select to calculate the excess return (a sketch of the assignment logic is given after the reference below).
- I have the returns for each of the companies I need.
Thanks in advance; if this is not the place to post, apologies, and I would be thankful for any pointers. Best.
- Reference: Faulkender, M., & Wang, R. (2006). Corporate financial policy and the value of cash. The Journal of Finance, 61(4), 1957-1990. Link
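Not the authors' code, but a minimal sketch of the assignment logic described in the quoted passage: sort each firm into a size quintile and a BE/ME quintile using breakpoints, look up the value-weighted return of that (size, BE/ME) portfolio, and subtract it from the firm's realized return. All breakpoints and returns below are hypothetical placeholders; in practice they come from Ken French's data library.
```
import numpy as np

def portfolio_id(size, beme, size_breaks, beme_breaks):
    # size_breaks / beme_breaks: the four quintile breakpoints (20%, 40%, 60%, 80%)
    s = int(np.searchsorted(size_breaks, size))   # 0..4 size quintile
    b = int(np.searchsorted(beme_breaks, beme))   # 0..4 BE/ME quintile
    return s, b                                   # index into the 5x5 portfolio grid

# hypothetical breakpoints and portfolio returns, for illustration only
size_breaks = [300, 800, 2500, 9000]              # market cap in $mm
beme_breaks = [0.25, 0.45, 0.70, 1.05]
portfolio_returns = {(2, 2): 0.09}                # value-weighted return of that bucket

s, b = portfolio_id(size=1200, beme=0.50, size_breaks=size_breaks, beme_breaks=beme_breaks)
r_stock = 0.12                                    # realized return of stock i over fiscal year t
r_bench = portfolio_returns.get((s, b), np.nan)
excess_return = r_stock - r_bench                 # the dependent variable r_{i,t} - R^B_{i,t}
print(s, b, excess_return)
```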
|
How to find stock i’s benchmark return for Cash-Value Regression
|
CC BY-SA 4.0
| null |
2023-05-18T12:42:19.383
|
2023-05-18T12:42:19.383
| null | null |
52116
|
[
"fama-french",
"cash"
] |
75578
|
2
| null |
75568
|
4
| null |
Your attempt is correct.
The quadratic variation for a Poisson process is:
$$[N]_t=\lim_{\sup(t_{i+1}-t_i)\rightarrow0}\sum_{i:t_i\leq t}(N_{t_{i+1}}-N_{t_i})^2$$
for some partition $\Pi(t)=0\leq t_0\leq\dots\leq t_n\leq t$ of the segment $[0,t]$. Simply pick the partition such that the jump times $\tau_1\leq\dots\leq\tau_k$ of $N$ between $0$ and $t$ are included in $\Pi(t)$; then, given $N_{\tau_j}-N_{\tau_j^-}=\Delta N_{\tau_j}$:
$$[N]_t=\sum_{j:\tau_j\leq t}\Delta N_{\tau_j}^2$$
But $\Delta N_{\tau_j}$ is always equal to 1, and so is $\Delta N_{\tau_j}^2$, therefore the quadratic variation of the Poisson process jumps by $1$ whenever $N$ jumps. Consequently:
$$\textrm{d}[N]_t=\textrm{d} N_t$$
| null |
CC BY-SA 4.0
| null |
2023-05-18T14:12:20.367
|
2023-05-18T14:12:20.367
| null | null |
20454
| null |
75579
|
1
| null | null |
2
|
36
|
Consider the following asset pricing setting for a perpetual defaultable coupon bond with price $P(V,c)$, where $V$ is the value of the underlying asset and $c$ is a Poisson payment that arrives with intensity $\lambda$.
Both $V$ and $c$ evolve as geometric Brownian motions, potentially correlated with:
$$\frac{dV_t}{V_t} = \mu_V dt + \sigma_V d Z^V_t$$
$$\frac{dc_t}{c_t} = \mu_c dt + \sigma_c d Z^c_t$$
and $Corr(d Z^V_t, d Z^c_t) = \rho dt$.
I believe the pricing equation for the bond looks like:
$$\frac{\sigma_V^2}{2}V^2 P_{VV} + \rho \sigma_V \sigma_c V c P_{Vc} + \frac{\sigma_c^2}{2}c^2 P_{cc} + \mu_V V P_V + \mu_c c P_c + \lambda c= rP$$
I'm guessing that the solution for this equation is:
$$P = A_0 + A_1 V^{\gamma_1} + A_2 V^{\gamma_2} + A_3 c^{\gamma_3} + A_4 c^{\gamma_4}$$
Is the above correct?
|
Pricing equation with two correlated states
|
CC BY-SA 4.0
| null |
2023-05-18T14:49:12.040
|
2023-05-18T15:01:49.267
|
2023-05-18T15:01:49.267
|
54556
|
54556
|
[
"brownian-motion",
"asset-pricing",
"geometric-brownian",
"pde"
] |
75580
|
1
| null | null |
4
|
109
|
I recently saw someone write, on a generally non-technical platform, that the Black-Merton-Scholes vanilla option price is the first term of an expansion of the price of a vanilla option.
I get that in the context of stochastic volatility models by making use of the Hull and White mixing formula. And thus also for the specific case of stochastic volatility with (Poisson) jumps in the underlying asset.
For more general processes, can someone derive or point to a paper where it is derived that the BSM price is indeed the first term of price expansion? I would very much like to see what the general series expansion looks like.
This question was prompted by this comment:
[](https://i.stack.imgur.com/FixKc.png)
which reacted to the perceived suggestion that the BSM model is useless.
|
BS price as the first term of option price expansion
|
CC BY-SA 4.0
| null |
2023-05-18T15:51:27.520
|
2023-05-19T11:21:26.127
|
2023-05-18T18:06:16.380
|
848
|
65759
|
[
"black-scholes",
"european-options",
"volatility-surface"
] |
75584
|
1
|
75599
| null |
0
|
56
|
I'm playing around with QuantLib and trying to price an interest rate cap using HW 1F model.
```
import QuantLib as ql
sigma = 0.35
a = 0.1
today = ql.Date(18, ql.May, 2023)
ql.Settings.instance().setEvaluationDate(today)
day_count = ql.ActualActual(ql.ActualActual.ISDA)
forward_rate = 0.075
forward_curve = ql.FlatForward(today, ql.QuoteHandle(ql.SimpleQuote(forward_rate)), day_count)
initialTermStructure = ql.YieldTermStructureHandle(forward_curve)
calendar = ql.NullCalendar()
convention = ql.ModifiedFollowing
index = ql.IborIndex('custom index', ql.Period('3m'), 0, ql.USDCurrency(),
calendar, convention, True, day_count)
start_date = ql.Date(18, ql.May, 2023)
end_date = ql.Date(18, ql.May, 2024)
period = ql.Period(3, ql.Months)
schedule = ql.Schedule(start_date, end_date, period, calendar, convention, convention, ql.DateGeneration.Forward, False)
nominal = 1000000
strike = 0.07
index_leg = ql.IborLeg([nominal], schedule, index)
cap = ql.Cap(index_leg, [strike])
model = ql.HullWhite(initialTermStructure, a, sigma)
engine = ql.AnalyticCapFloorEngine(model, initialTermStructure)
cap.setPricingEngine(engine)
cap.NPV()
```
It seems like I'm doing something wrong, since I get the following error:
>
null term structure set to this instance of custom index3M
Actual/Actual (ISDA)
How can I troubleshoot this error?
|
QuantLib: null term structure set to this instance of index
|
CC BY-SA 4.0
| null |
2023-05-19T09:31:19.830
|
2023-05-20T17:32:44.430
|
2023-05-20T12:30:35.540
|
27119
|
27119
|
[
"programming",
"quantlib",
"hullwhite"
] |
75585
|
2
| null |
75580
|
1
| null |
The closest thing I can find to a general expansion of an option price in terms of BS price + correction terms is the following paper by Merino and Vives. It basically shows this using three methods, namely Ito calculus, Functional Ito calculus, and Malliavin calculus. The 'stochastic volatility' in the title of the paper actually includes local stochastic volatility models as well.
[Merino and Vives, A generic decomposition formula for pricing vanilla options under stochastic volatility](https://downloads.hindawi.com/archive/2015/103647.pdf)
| null |
CC BY-SA 4.0
| null |
2023-05-19T11:21:26.127
|
2023-05-19T11:21:26.127
| null | null |
65759
| null |
75587
|
1
| null | null |
0
|
23
|
I am trying to understand how transfer pricing is accounted for for margined trades.
The treasury department within a bank will provide trading desks funding at some base rate plus a transfer pricing spread: the cost of funding.
For an uncollateralised derivative, future payments should be discounted at this funding cost.
For a margined derivative, future payments are discounted at the collateral rate, which is usually the base rate. However, a trading desk will need to fund any margin calls, and presumably this funding is obtained from treasury at a rate of `base + transfer-pricing`.
Let's assume that the future value of a derivative contract is `-$100`, with a present value of `-$95` discounted at the base rate. At t=0, the holder must post `$95` to the contract counterparty.
Given the desk will be funding at base + tp, does this mean that the desk's net cash position at termination of the contract will be `-($100 + transfer-pricing-cost)`? This seems like a logical conclusion; however, it is contrary to my company's (a financial institution's) internal material, which indicates that margined trades cost/earn the base rate, in which case I am puzzled as to 'where the transfer pricing goes'.
|
transfer pricing a cost when posting collateral
|
CC BY-SA 4.0
| null |
2023-05-19T20:28:32.137
|
2023-05-19T20:28:32.137
| null | null |
29211
|
[
"margin",
"collateral"
] |
75588
|
2
| null |
49093
|
4
| null |
You would actually sell the option to yourself if you were long a Call on EUR and short a Put on USD.
FX is quoted as CCY1CCY2 (e.g. EURUSD), which shows the amount of CCY2 needed to buy/sell 1 unit of CCY1. In terms of options, the standard Black-Scholes model (called Garman-Kohlhagen in FX) has the notional in CCY1 but the price in CCY2. The call and put designation refers to CCY1. A EUR call is equal to a USD put, and you do not swap the direction from long to short (that would be on the other side of the market).
Hence, if you want a Call on EUR you just need to buy a Call option. You can see in the screenshot below that a Long EUR Put is identical to a Long USD Call (or Long USD Put is equal to Long EUR Call).
[](https://i.stack.imgur.com/KJxro.png)
You can look [here](https://quant.stackexchange.com/a/63552/54838) if you know some basic coding and are interested to compute the values yourself.
| null |
CC BY-SA 4.0
| null |
2023-05-19T21:54:19.347
|
2023-05-19T21:54:19.347
| null | null |
54838
| null |
75591
|
1
| null | null |
-2
|
66
|
I'm looking for a library like BackTrader, or a platform, written in C++ for backtesting.
Is C++ a good choice for writing a backtester, and are any available?
|
Is C++ suitable for writing backtests
|
CC BY-SA 4.0
| null |
2023-05-20T08:24:05.353
|
2023-05-20T14:55:39.517
|
2023-05-20T14:52:51.863
|
848
|
67486
|
[
"backtesting"
] |
75592
|
1
|
75597
| null |
1
|
313
|
I encountered this phrase in the textbook by Hans Schumacher
>
For the purposes of this textbook, a mathematical model for a financial market consists of a specification of the joint evolution of prices of a number of given assets. Further information that may be important in practice, such as trading restrictions,
are abstracted away by the assumption of perfect liquidity.
Could anyone explain how assuming perfect liquidity would resolve issues like trading restrictions?
|
Assuming perfect liquidity
|
CC BY-SA 4.0
| null |
2023-05-20T11:10:39.063
|
2023-05-20T22:25:54.633
| null | null |
65018
|
[
"financial-markets",
"market-model"
] |
75593
|
2
| null |
75591
|
2
| null |
You can write a backtester in any language you want, and quite a few people have done so in C++ and made the source code available:
[https://github.com/topics/backtesting?l=c%2B%2B](https://github.com/topics/backtesting?l=c%2B%2B)
A reason to perform backtests with code written in C++ is that it can achieve very good performance.
| null |
CC BY-SA 4.0
| null |
2023-05-20T11:52:56.390
|
2023-05-20T14:55:39.517
|
2023-05-20T14:55:39.517
|
848
|
848
| null |
75594
|
1
| null | null |
0
|
60
|
I am building a VaR model (in Excel using the variance-covariance method) for a portfolio containing stocks, bonds, and an ETF. Additionally, there is a put option (out of the money) that is there to hedge the downside risk of my position in the ETF. The ETF tracks the MSCI index.
My question is, how do I implement the put option in my VaR model?
|
Value at risk (portfolio with stocks, bonds, and options)
|
CC BY-SA 4.0
| null |
2023-05-20T12:39:21.677
|
2023-05-20T16:38:02.347
|
2023-05-20T13:29:41.730
|
26556
|
67488
|
[
"value-at-risk"
] |
75595
|
2
| null |
75594
|
2
| null |
You have a matrix that, for all your market factors, contains their volatilities and pairwise correlations, probably calculated from historical market data. You assume that they are normally distributed. You calculate VaR at some confidence level, often 99%.
As long as all the instruments in your portfolio are linear within your chosen confidence interval, you can use matrix multiplication, which is easy in Excel: the square root of (the vector of your factor sensitivities times the covariance matrix times the transpose of that vector), times normsinv(confidence level), times sqrt(time horizon), and a few other things.
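A minimal numpy sketch of that delta-normal calculation (all sensitivities, vols and correlations below are made-up illustrative numbers, and the "few other things" are omitted):
```
import numpy as np
from scipy.stats import norm

sens = np.array([1_000_000.0, -250_000.0, 500_000.0])   # $ sensitivities to each factor
vols = np.array([0.010, 0.008, 0.015])                   # daily factor volatilities
corr = np.array([[1.0, 0.3, 0.5],
                 [0.3, 1.0, 0.2],
                 [0.5, 0.2, 1.0]])
cov = np.outer(vols, vols) * corr                         # daily factor covariance matrix

conf_level = 0.99
horizon_days = 10
portfolio_sd = np.sqrt(sens @ cov @ sens)                 # 1-day $ standard deviation of P&L
var = norm.ppf(conf_level) * portfolio_sd * np.sqrt(horizon_days)
print(f"{conf_level:.0%} {horizon_days}-day VaR = {var:,.0f}")
```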
In your example, if the option is so far out of the money that it is worthless within your confidence interval, you just assume that it's worthless and effectively exclude it from the VaR calculation. Likewise, if the option were so far in the money that you could assume you hold the underlying, then you could indeed assume that you hold the underlying. But, conversely, if the option is out of the money now but would be in the money if the underlying moved, e.g., 2 historical standard deviations (in other words, the value of the option is materially non-linear within your confidence interval), then you can't use matrix multiplication anymore. You have to use a Monte-Carlo simulation (MC). It is described well, for example, in [Value at Risk: The New Benchmark for Managing Financial Risk by Philippe Jorion](https://rads.stackoverflow.com/amzn/click/com/0071464956) $\S12$. For MC, you will need to estimate the change in the value of the option under thousands of risk scenarios. This is likely to be computationally intensive, so you may want to look into shortcuts where you reprice the option under a few scenarios and linearly interpolate between them.
While it's possible to implement MC in Excel, it's really painful, and I urge you to use another tool more suitable for such calculations.
I also urge you to look at [expected shortfall (ES)](https://en.wikipedia.org/wiki/Expected_shortfall), rather than VaR.
Another approach you might try instead of MC is not to assume that a small number of parameters summarizes the dynamics of your historical market data, and not to use any covariance matrix, but rather to calculate how the value of your portfolio would change if the market data changed like it did in your historical data, and use the worst cases for VaR or ES.
You should also consider the impact of additional historical or hypothetical market stress scenarios.
| null |
CC BY-SA 4.0
| null |
2023-05-20T13:26:59.120
|
2023-05-20T16:38:02.347
|
2023-05-20T16:38:02.347
|
36636
|
36636
| null |
75597
|
2
| null |
75592
|
5
| null |
A trading restriction could mean that you cannot short certain instruments, or that you cannot execute above/below certain volumes. For example in practice you cannot trade fractional numbers of a stock, let alone irrational numbers. Perfect liquidity basically means none of these restrictions exist, it is an idealization to facilitate modelling.
| null |
CC BY-SA 4.0
| null |
2023-05-20T15:18:15.677
|
2023-05-20T15:18:15.677
| null | null |
65759
| null |
75598
|
1
| null | null |
0
|
76
|
I'm sure there's an obvious answer to this question so apologies. But reading through [this](https://web.math.ku.dk/%7Erolf/SABR.pdf) seminal paper on the SABR model, the authors provide an explicit formula for the (normal) implied vol of a $\tau$-expiry, strike $K$ option, $\sigma_N(K)$ (ibid. equations A67a,b). For the case $\beta=0$ (i.e. for a Bachelier distribution of the forward $f$) this formula simplifies to
\begin{equation}
\sigma_N(K)=\alpha\frac{\zeta}{D(\zeta)}\left(1+\frac{2-3\rho^2}{24}\nu^2\tau\right)\tag{1}
\end{equation}
where $\alpha,\nu,\rho$ are the remaining SABR parameters, \begin{equation}\zeta=\frac{\nu}{\alpha}(f-K) \tag{2}\end{equation} and \begin{equation}D(\zeta)=\ln\left(\frac{\sqrt{1-2\rho\zeta+\zeta^2}-\rho+\zeta}{1-\rho}\right).\end{equation} Now, the authors "further simplify" $(1)$ by recasting $\zeta$ (through a series expansion of the term $(f-K)$) as \begin{equation}\zeta=\frac{\nu}{\alpha}\sqrt{fK}\ln\left(\frac{f}{K}\right)\tag{3}\end{equation} with $D(\zeta)$ as before (ibid. equations A69a,b). It escapes me as to exactly why this is done: is it purely to eliminate $K$ and $f$ having opposite signs and setting $K \in (0,\infty]$? Not only does taking $\zeta$ as given by $(2)$ allow for smile calibration to negative strikes, taking it in the form $(3)$ (implemented on a computer) gives poor fitting for deep OTM strikes and nasty behavior for strikes close to zero.
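For reference, a direct transcription of equations $(1)$-$(2)$ above into Python (a sketch only, not a validated implementation; the parameters in the example call are arbitrary):
```
import numpy as np

def sabr_normal_vol(f, k, tau, alpha, nu, rho):
    zeta = (nu / alpha) * (f - k)                        # eq. (2)
    if abs(zeta) < 1e-12:
        zeta_over_d = 1.0                                # limit of zeta/D(zeta) as zeta -> 0 (ATM)
    else:
        d = np.log((np.sqrt(1 - 2*rho*zeta + zeta**2) - rho + zeta) / (1 - rho))
        zeta_over_d = zeta / d
    return alpha * zeta_over_d * (1 + (2 - 3*rho**2) / 24 * nu**2 * tau)   # eq. (1)

# arbitrary illustrative parameters
print(sabr_normal_vol(f=0.02, k=0.015, tau=5.0, alpha=0.006, nu=0.3, rho=-0.2))
```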
|
Hagan's implied vol formula for Normal SABR
|
CC BY-SA 4.0
| null |
2023-05-20T17:15:04.090
|
2023-05-20T17:54:06.183
|
2023-05-20T17:54:06.183
|
35980
|
35980
|
[
"sabr"
] |
75599
|
2
| null |
75584
|
1
| null |
The index requires a fwd curve to calculate the ibor fwds. `index = ql.IborIndex('custom index', ql.Period('3m'), 0, ql.USDCurrency(),calendar, convention, True, day_count,initialTermStructure)` will work.
| null |
CC BY-SA 4.0
| null |
2023-05-20T17:32:44.430
|
2023-05-20T17:32:44.430
| null | null |
35980
| null |
75600
|
1
| null | null |
2
|
70
|
Ok, this is a bit of a long read, so be warned.
I am currently learning about the so-called "stochastic collocation" technique, which seems to have become quite popular in recent years for different applications in finance.
To get some feeling for the method I have made an attempt to implement the method used in the article [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2529684](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2529684).
Basically, the problem is that the well-known Hagan analytical formulas for producing Black volatilities of options in the SABR model become more inaccurate for longer maturities. The approximate "Hagan" volatility is then, in some cases, no longer close to the true theoretical Black volatilities implied by the SABR model.
The main idea, very simply speaking, of the article is to project the good part of the distribution of a "bad" variable (the one implied by Hagans formulas here) onto a "good" one with a simple distribution (the normal distribution in this case), which is then used in a clever way to extrapolate and replace the bad sections of the bad variable.
The authors claim that in this way they create an underlying density implying "arbitrage free implied volatilities".
I think I understand everything in the article and I have tried to implement the method.
For standard cases I seem to be able to replicate the results in the article.
However, in some more extreme cases for larger maturities and smaller strikes, I cannot calculate any implied volatility.
Unless I have some rather bad numerical issues in my implementation (which is certainly a possibility), the problem seems to be in the interpolated density using the stochastic collocation method. It gives me call option prices that are not attainable in the Black model.
The reason for the problem is arbitrage, more precisely calendar arbitrage: in some cases I get a call option price which at time 0 is greater than the option price at maturity for small strikes.
The produced density in the article seems free of butterfly arbitrage though.
However, the calendar arbitrage makes it impossible to create the arbitrage free volatility since it does not even exist in this case because of precisely arbitrage.
Sure, the problem only appears for longer maturities and small strikes, but since the whole point of the article was to produce an arbitrage free volatility in the problematic situations, I am a bit confused.
Maybe I am missing something obvious here, but I see no proof or motivation as to why the produced density of the underlying should be free of calendar arbitrage?
(I have also implemented the collocation method so that the expected value of the underlying at maturity, given the collocated density, matches the forward value.)
So my question is simply if the density calculated in the article actually always truly both lacks butterfly and calendar arbitrage and I therefore made some mistake/have some numerical issues, or if it actually sometimes has arbitrage?
I can think of a few ideas on how to fix this, if it really isn't a simple mistake by me.
Knowing the theoretical (unconditional) absorption probability at zero of the SABR distribution would help, but I don't think this is known, right?
Or do there exist some good approximations of this?
Or maybe there are some well known procedure when it comes to "removing" calendar arbitrage in such situations in some "natural" way?
(This situation also sometimes occur during Monte Carlo simulations, I have noticed, when you have some numerical errors causing some slight calendar arbitrage in the sample, making it impossible to calculate the implied vol for some strikes)
Edit:
After some more investigations, I realized I made an error.
A part of the option pricing procedure consists of inverting the main interpolation function used. This is well defined on the "good part" of the function, but outside that part the inverse is typically not uniquely defined. In some cases my solver jumped to the wrong value of the inverse, causing the option prices to become invalid.
I will fix this and see if it solves the problem (which I have no doubt it will).
I still don't know the answer to my original question though, as to why the method produces option prices free of calendar arbitrage.
Probably there is some obvious reason why, but I don't see it at the moment.
|
SABR, Stochastic collocation and calendar arbitrage
|
CC BY-SA 4.0
| null |
2023-05-20T20:01:41.010
|
2023-05-22T20:39:14.557
|
2023-05-22T20:39:14.557
|
47714
|
47714
|
[
"volatility",
"arbitrage",
"sabr"
] |
75604
|
1
| null | null |
2
|
119
|
For example, what does "outright" mean in outright OIS swap? Another similar question, what does "USD fedfund is marked as outright" mean? Thank you
|
what does "outright" mean in rates world?
|
CC BY-SA 4.0
| null |
2023-05-20T22:44:47.250
|
2023-05-24T10:30:31.210
|
2023-05-21T10:03:54.473
|
20454
|
51700
|
[
"swaps",
"terminology"
] |
75608
|
1
| null | null |
0
|
68
|
I've been analyzing Tesla stock American options data and have observed an interesting pattern that I'd appreciate some help understanding.
For this analysis, I obtained the implied volatilities (IVs) by inverting the binomial option pricing model for American options, and used the current market price derived via put-call parity (PCP) at the ATM strike.
Unlike European options, where we know that the In-The-Money (ITM) Implied Volatility (IV) of the put side equals the Out-Of-The-Money (OTM) IV of the call side and vice versa, American options seem to behave differently.
In the case of American equity options, it appears to be such that:
IV of ITM Call side > OTM Put side
IV of ITM Put side > OTM Call side
For clarity, I've attached an image plot that illustrates this:
[](https://i.stack.imgur.com/xxXeg.png)
While I'm aware of the fact that Put-Call Parity does not hold in American options, causing implied IVs for calls and puts to diverge, I'm struggling to understand the mathematical reasoning that leads to ITM options generally being pricier.
Could someone explain why this might be the case? Any insights into the mathematics or logic behind this observed pattern would be greatly appreciated.
Thank you.
|
Implied Volatility Discrepancy in American Options - Mathematical Reasoning?
|
CC BY-SA 4.0
| null |
2023-05-21T13:57:21.260
|
2023-05-27T17:28:12.457
|
2023-05-27T17:28:12.457
|
848
|
41600
|
[
"options",
"implied-volatility",
"american-options"
] |
75611
|
1
|
75615
| null |
2
|
60
|
Many years ago, I worked on the pricing of IR products (floating rate swaps, CMS swaps, caps, floors, ...).
Libor rates have now been replaced by the SOFR rate. I would like to know:
- What are the new IR products (Floating rate swap, CMS swap, Cap, Floor,...) after the replacement of Libor rates by SOFR rate? And how are these new IR products priced?
I would be grateful if you could give me some good references (papers, books,...).
|
What are the quantitative models for modelling the SOFR rate, the IR products when Libor rates end
|
CC BY-SA 4.0
| null |
2023-05-21T18:24:35.700
|
2023-05-21T20:41:43.430
|
2023-05-21T20:21:26.883
|
24336
|
24336
|
[
"interest-rates",
"pricing",
"libor",
"sofr",
"modelling"
] |
75613
|
1
| null | null |
0
|
34
|
In the binomial tree options pricing literature, I see frequent reference to the definition that
$$
u = e^{\sigma \sqrt{t/n}}
$$
I think I understand the model, but how do we derive this, i.e. how do I fit the model to data? I've tried to derive it myself but get a different answer, namely
$$
u = \exp(\sigma / 2\sqrt{np(1-p)})
$$
where $p$ is the risk neutral probability. The paper "A Synthesis of Binomial Option Pricing Models for Lognormally Distributed Assets" even argues that the CRR derivations admit arbitrage for discrete time periods.
|
How to fit (find $u$) in the binomial options pricing model?
|
CC BY-SA 4.0
| null |
2023-05-21T19:54:22.423
|
2023-05-21T19:54:22.423
| null | null |
56943
|
[
"options",
"binomial-tree"
] |
75614
|
1
| null | null |
0
|
42
|
I'm trying to improve my understanding of valuation under collateralisation.
One point that is made in multiple sources is that, for an uncollateralised derivative, a future cashflow is equivalent to a loan made to that counterparty, and that the holder of the derivative must take that loan.
I just can't see this equivalence. If we were to 'convert' or 'realise' a future cashflow into a present cashflow (to, for example, pay salaries or some other cost), then we would have to go out and borrow some amount, with the repayment of that principal plus interest to be made by that future cashflow (the basic present value arbitrage reasoning). So we discount the future value by whatever the cost of funding is. Nowhere is there a relationship with the derivative counterparty. Further, I can't see how there is a mandatory requirement to take this loan.
Is there a simple perspective that I have missed that would resolve my confusion?
Some examples:
[How to hedge the fixed leg of a swap contract?](https://quant.stackexchange.com/questions/7361/how-to-hedge-the-fixed-leg-of-a-swap-contract/7406#7406)
>
Assume the swap is not collateralized, then you have to fund all
future values. The net payments in the swap are then payed by
corresponding the maturing of the corresponding funding contracts
(which you created dynamically in your hedge).
(why do we have to fund anything?)
[Collateral replication argument](https://quant.stackexchange.com/questions/35389/collateral-replication-argument/35391#35391)
>
Let us consider the situation where the firm A has a positive present
value (PV) in the contract with the firm B with high credit quality.
From the view point of the firm A, it is equivalent to providing a
loan to the counterparty B with the principal equal to its PV. Since
the firm A has to wait for the payment from the firm B until the
maturity of the contract, it is clear that A has to finance its loan
and hence the funding cost should be reflected in the pricing of the
contract.
(again, why does firm A have to finance anything?)
[https://vdoc.pub/documents/the-xva-of-financial-derivatives-cva-dva-and-fva-explained-ruskbvi6ess0](https://vdoc.pub/documents/the-xva-of-financial-derivatives-cva-dva-and-fva-explained-ruskbvi6ess0)
>
they are effectively unsecured borrowing and lending with the counterparty
[https://www.pwc.com.au/pdf/xva-explained.pdf](https://www.pwc.com.au/pdf/xva-explained.pdf)
>
Similarly, a funding cost arises for the bank when a derivative has a
positive market value. The purchase of an ‘in the money’ or asset
position derivative requires the bank to pay cash. The incremental
cost of funding this purchase can also be seen as equivalent to the
cost of the bank raising funding.
I can understand why a bank would need to fund if they were to purchase an in the money derivative, but why does there need to be any funding for a derivative that starts with zero valuation and then goes into the money?
|
future cashflow loan equivalence
|
CC BY-SA 4.0
| null |
2023-05-21T20:21:24.230
|
2023-05-25T08:02:03.603
|
2023-05-21T20:34:51.030
|
29211
|
29211
|
[
"derivatives",
"fundamentals",
"collateral"
] |
75615
|
2
| null |
75611
|
3
| null |
The reference you want is [https://www.newyorkfed.org/arrc](https://www.newyorkfed.org/arrc)
The conversion from LIBOR to SOFR was thoroughly worked through and well publicised, both regarding the transition issues and the ultimate recommendations.
For FRNs, a variety of possibilities existed. The recommendations were a lookback period of 'x' days with observation shift. This means that coupon periods on FRNs rely on the published, compounded SOFR fixings within a period, but lagged by 'x' days so that cashflows can be scheduled and settled.
In the IRS space the market now trades SOFR IRS on an Annual/Annual Act360/Act360 structure with no lookback but with a payment lag of 2 business days to ensure operational cashflow settlement.
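To make the compounded-in-arrears mechanics above concrete, here is a minimal sketch of a compounded SOFR coupon rate on an ACT/360 basis; the fixings and day counts are made up, and the lookback/observation-shift and payment-lag details are deliberately ignored:
```
# daily SOFR fixings and the number of calendar days each fixing applies (weekend = 3)
rates = [0.0530, 0.0531, 0.0531, 0.0532, 0.0532]
days = [1, 1, 1, 3, 1]

growth = 1.0
for r, d in zip(rates, days):
    growth *= 1 + r * d / 360            # daily compounding on ACT/360

period_days = sum(days)
compounded_rate = (growth - 1) * 360 / period_days   # annualized coupon rate for the period
print(compounded_rate)
```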
| null |
CC BY-SA 4.0
| null |
2023-05-21T20:41:43.430
|
2023-05-21T20:41:43.430
| null | null |
29443
| null |
75616
|
1
| null | null |
0
|
52
|
Upvar and downvar swaps are examples of conditional variance swaps.
For example, a downvar swap only accrues variance for day $i$ (the day whose logarithmic return is $\log(S_i/S_{i-1})$) if the underlying $S$ is below a threshold level $H$. This condition can be defined in 3 ways:
- $S_{i-1} \le H$,
- $S_{i} \le H$,
- Both $S_{i-1} \le H$ and $S_{i} \le H$.
Can someone please explain which convention the investor would choose when trading an upvar swap and a downvar swap, among $i$, $i-1$, or $i \text{ and } i-1$, and correspondingly which spot convention is better for dealers?
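For intuition, a toy sketch of how the accrued variance of a downvar swap differs under the three conventions listed above (made-up closing prices, with annualization and notional scaling omitted):
```
import numpy as np

S = np.array([100.0, 99.0, 97.5, 98.5, 101.0, 100.5])   # hypothetical daily closes
H = 100.0                                                 # barrier level
logret2 = np.log(S[1:] / S[:-1]) ** 2                     # squared daily log-returns

cond_prev = S[:-1] <= H            # condition on S_{i-1}
cond_curr = S[1:] <= H             # condition on S_i
cond_both = cond_prev & cond_curr  # both S_{i-1} and S_i

for name, cond in [("S_{i-1}<=H", cond_prev), ("S_i<=H", cond_curr), ("both", cond_both)]:
    print(name, logret2[cond].sum())
```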
|
Upvar downvar swap $i, i-1, i\land i-1$ conventions
|
CC BY-SA 4.0
| null |
2023-05-22T07:17:18.970
|
2023-05-22T09:54:54.250
|
2023-05-22T09:54:54.250
|
16148
|
67507
|
[
"variance-swap"
] |
75618
|
2
| null |
64051
|
4
| null |
I think the simplest way to do this in Python is with [Databento](https://databento.com/)'s API.
```
import databento as db
client = db.Historical()
data = client.timeseries.get_range(
dataset='GLBX.MDP3',
schema='definition',
start='2022-10-10',
end='2022-10-10',
)
df = data.to_df()
print(df[['symbol', 'expiration']])
```
This gets you the expirations for all symbols (617,090 in total) that were active on a specific historical date (2022-10-10).
```
symbol expiration
ts_recv
2022-10-10 00:00:00+00:00 OSXG3 P9850 2022-12-22 19:30:00+00:00
2022-10-10 00:00:00+00:00 AAOJ4 C5950 2024-04-30 18:30:00+00:00
2022-10-10 00:00:00+00:00 7FV3-7FG4 2023-10-11 15:30:00+00:00
2022-10-10 00:00:00+00:00 WAYZ2 P-150 2022-11-18 19:30:00+00:00
2022-10-10 00:00:00+00:00 AHMG4 2024-02-01 04:59:00+00:00
... ... ...
2022-10-10 23:58:50.142534786+00:00 UD:1V: BX 1011861394 2022-10-11 20:00:00+00:00
2022-10-10 23:59:03.453690873+00:00 UD:1V: VT 1011861395 2022-10-14 21:59:00+00:00
2022-10-10 23:59:05.521727212+00:00 UD:EN: GN 1011861396 2022-10-14 20:00:00+00:00
2022-10-10 23:59:18.036952193+00:00 UD:1V: BX 1011861397 2022-10-11 20:00:00+00:00
2022-10-10 23:59:51.703175001+00:00 UD:1V: BX 1011861398 2022-10-11 20:00:00+00:00
[617090 rows x 2 columns]
```
| null |
CC BY-SA 4.0
| null |
2023-05-22T12:00:04.683
|
2023-05-22T12:00:04.683
| null | null |
4660
| null |
75619
|
2
| null |
4181
|
0
| null |
I'm using Skender Stock Indicators ([https://www.nuget.org/packages/Skender.Stock.Indicators](https://www.nuget.org/packages/Skender.Stock.Indicators)).
Built some back-tests for my experimental algorithms using 1-minute OHLC data downloaded from CryptoDataDownload ([https://www.cryptodatadownload.com/data/gemini/](https://www.cryptodatadownload.com/data/gemini/)). Note: a free account is needed to download!
| null |
CC BY-SA 4.0
| null |
2023-05-22T20:35:00.937
|
2023-05-22T20:35:00.937
| null | null |
67514
| null |
75620
|
1
| null | null |
0
|
50
|
(Note there are similar questions, with different focuses at this forum, but my focus is more on the general concept, if any, about backtesting (for stocks) and sources of information where I can go to, if any, sorry if any duplicates)
I have started to use a backtesting system for the local stock market (Taiwan), I do still have some details that I need to understand more about:
Market data: daily: open, close, adjusted open, adjusted close, benchmark (market), etc., as well as monthly and quarterly data.
Parameters (input of backtesting): limit of number of stocks, position, stop loss %, take profit %, position limit %, trade at price (open, close, custom such as (adjusted open + adjusted close)/2)
Metrics (output of backtesting): these values seem to be based on ffn: rf (risk-free rate %), total_return (not sure what), cagr (not sure whether this should be used as the score to evaluate a strategy or not), calmar (Calmar ratio %), daily_sharpe (Sharpe ratio), daily_sortino (Sortino ratio), daily_mean (annual return %?), daily_skew (skew), daily_kurt (kurtosis), many of them with different timeframes: daily, monthly, quarterly, yearly, etc. How are all these calculated? Which version of returns should be used: arithmetic average or geometric average? How to calculate alphas, betas, etc.?
Portfolio of strategies: how to backtest a portfolio of strategies on top of the backtesting results of the component strategies? If I have 10 strategies from which to make a portfolio, even with the equal-weights assumption there are 1023 possible combinations; how to evaluate and compare the performance of all of them? I myself use this: average(z-score of annualized return, z-score of annualized Sharpe ratio, z-score of Sortino ratio, z-score of Calmar ratio) (z-score means the rank of the metric of one combination among all the possible combinations), but I am not sure if this makes sense. How about optimal weights of the portfolio from the "efficient frontier" (any tool?)?
The stock market strategies are long position only strategies, do we need to add short market-level position to compensate them?
If we do not use AI/machine learning yet and do not want to split the data for in-sample/out-of-sample validation, is there a tool we can test the strategy by comparing yearly performances over time?
Please point me to sources, if any, with a general backtesting algorithm and data structure (especially for the stock market); thanks in advance.
|
Any document about general backtesting algorithm and data structure
|
CC BY-SA 4.0
| null |
2023-05-23T02:37:22.300
|
2023-05-23T02:37:22.300
| null | null |
67396
|
[
"quant-trading-strategies",
"algorithmic-trading",
"backtesting",
"factor-investing"
] |
75621
|
1
|
75625
| null |
0
|
114
|
I've googled and read many articles about "ROIC" and "Invested Capital", but I'm still confused about how to calculate them.
The best explanation I've seen so far is:
- Invested Capital (from CFI)
- Invested Capital Formula (from EDUCBA)
- Invested Capital Formula (from WallStreetMojo)
There are at least two ways to calculate "Invested Capital":
- financing approach: $\text{Invested Capital} = \text{Total Debt \& Leases} + \text{Total Equity \& Equity Equivalents} + \text{Non-Operating Cash \& Investments}$
- operating approach: $\text{Invested Capital} = \text{Net Working Capital} + \text{PP\&E} + \text{Goodwill \& Intangibles}$
I've tried to apply those approaches to [AAPL's 2022 fiscal report (SEC Form 10-K)](https://www.sec.gov/ix?doc=/Archives/edgar/data/320193/000032019322000108/aapl-20220924.htm#ief5efb7a728d4285b6b4af1e880101bc_73).
Financing approach:
```
A) Commercial paper: 9,982
B) Term debt (Current liabilities): 11,128
C) Term debt (Non-current liabilities): 98,959
D) Total shareholders’ equity: 50,672
E) Cash used in investing activities: (22,354)
F) Cash used in financing activities: (110,749)
Invested Capital = A+B+C+D+E+F = 37,638
```
Operating approach:
```
A) Total current assets: 135,405
B) Total current liabilities: 153,982
C) Property, plant and equipment, net: 42,117
D) Goodwill & Intangibles: N/A
Invested Capital = A-B+C+D = 23,540
```
The results from the two approaches are not equal.
## Questions
- Did I calculate them wrong?
- Are the results from the two approaches always equal to each other?
- If not, how do I choose which approach to use for ROIC?
- Can I apply the financing approach to all companies with just those 6 values/facts, and calculate and compare the ROIC for different companies?
|
Invested Capital: operating approach vs financing approach
|
CC BY-SA 4.0
| null |
2023-05-23T06:40:26.727
|
2023-05-25T11:20:49.910
| null | null |
67019
|
[
"investment",
"accounting"
] |
75622
|
1
| null | null |
3
|
60
|
In this paper [Elliott, R., van der Hoek, J. and Malcolm, W. (2005) Pairs Trading.](https://www.scirp.org/(S(351jmbntvnsjt1aadkposzje))/reference/ReferencesPapers.aspx?ReferenceID=1687852), the spread (state process) is assumed to follow a mean reverting process $x_{k+1}-x_k=(a-bx_k)\tau+\sigma\sqrt{\tau}\varepsilon_{k+1}$, and the observed spread is assumed to follow a (observation) process $y_k=x_k+Dw_k$.
In section 2.3, the authors have mentioned that "if $y_k>\hat{x}_{k|k-1}=E[x_k|\sigma(y_0,\cdots,y_{k-1})]$ then the spread is regarded as too large, and the trader could take a long position in the spread portfolio and profit when a correction occurs."
I want to ask why this trading strategy is reasonable. If $y_k>\hat{x}_{k|k-1}=E[x_k|\sigma(y_0,\cdots,y_{k-1})]$, I think that the predicted spread at time $k$ is smaller than the observed spread, so the price is high right now and it will go down eventually since it is assumed to be mean reverting, so maybe I would rather short the position.
Appreciate any suggestion and comments.
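For what it's worth, a minimal Kalman-filter sketch of the quoted state/observation model, showing where $\hat{x}_{k|k-1}$ comes from and how the $y_k>\hat{x}_{k|k-1}$ comparison is generated (parameters and observations are made up, and no position logic is implied):
```
import numpy as np

a, b, tau, sigma, D = 0.5, 0.8, 1/252, 0.1, 0.05
F = 1 - b * tau                # state transition coefficient from x_{k+1}-x_k=(a-b x_k)tau+...
c = a * tau                    # state transition intercept
Q = sigma**2 * tau             # state noise variance
R = D**2                       # observation noise variance

y = np.array([0.62, 0.65, 0.61, 0.70, 0.66])   # observed spreads (made up)
x_hat, P = y[0], 1.0                            # initial filtered state and variance
for yk in y[1:]:
    # predict: x_hat_{k|k-1} and its variance
    x_pred = c + F * x_hat
    P_pred = F * P * F + Q
    signal = "spread looks rich" if yk > x_pred else "spread looks cheap"
    # update with the new observation
    K = P_pred / (P_pred + R)
    x_hat = x_pred + K * (yk - x_pred)
    P = (1 - K) * P_pred
    print(f"y={yk:.2f}, x_pred={x_pred:.3f} -> {signal}")
```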
|
Why long a position when the acutal price is higher than the predicted price? (Kalman filter for pairs trading)
|
CC BY-SA 4.0
| null |
2023-05-23T07:13:53.603
|
2023-05-23T07:37:57.567
|
2023-05-23T07:37:57.567
|
67522
|
67522
|
[
"pairs-trading",
"kalman"
] |
75623
|
1
| null | null |
1
|
56
|
I try to compute the local volatility in Python using both formulas, i.e. in terms of the call price surface and in terms of the total variance surface.
However, I get 2 different values. What did I do wrong?
```
import numpy as np
from scipy.stats import norm
today=44935
date1=44956
t1 = (date1-today)/365
a1=0.00403326433586633
b1=0.0393291646363571
sigma1=0.103678184440224
rho1=-0.147025189597259
m1=0.0154107775237231
date2=44957
t2 = (date2-today)/365
a2=0.00423386330856259
b2=0.0402496625734392
sigma2=0.106395807290502
rho2=-0.150124616359447
m2=0.0161557212111055
def rawSVI(logMoneyness,a,b,sigma,rho,m):
return a + b*(rho*(logMoneyness-m) + ((logMoneyness-m)**2 + sigma**2)**0.5)
def bsCall(s,k,r,t,vol):
d1 = (np.log(s/k) + (r+0.5*vol**2)*t)/vol/ t**0.5
d2 = d1 - vol*t**0.5
return s*norm.cdf(d1) - k*np.exp(-r*t)*norm.cdf(d2)
w = rawSVI(0,a1,b1,sigma1,rho1,m1)
vol = np.sqrt(w/t1)
s = 1
r = 0
k = 0.65
f = s*np.exp(r*t1)
y = np.log(k/f)
#method 1
dk = 0.01
vol = np.sqrt(rawSVI(np.log(k/f),a1,b1,sigma1,rho1,m1)/t1)
call = bsCall(s,k,r,t1,vol)
k_plus = k*(1+dk)
vol_plus = np.sqrt(rawSVI(np.log(k_plus/f),a1,b1,sigma1,rho1,m1)/t1)
call_plus = bsCall(s,k_plus,r,t1,vol_plus)
k_minus = k*(1-dk)
vol_minus = np.sqrt(rawSVI(np.log(k_minus/f),a1,b1,sigma1,rho1,m1)/t1)
call_minus = bsCall(s,k_minus,r,t1,vol_minus)
call_dkdk = (call_plus - 2*call + call_minus)/(k_plus - k) / (k - k_minus)
vol_tPlus = np.sqrt(rawSVI(np.log(k/f),a2,b2,sigma2,rho2,m2)/t2)
call_tPlus = bsCall(s,k,r,t2,vol_tPlus)
call_dt = (call_tPlus - call)/(t2-t1)
localVol_1 = np.sqrt(2*call_dt/k**2/call_dkdk)
print(localVol_1) #0.9224266360763037
#method 2
dy = 0.01
w = rawSVI(y,a1,b1,sigma1,rho1,m1)
w_plus = rawSVI(y*(1+dy),a1,b1,sigma1,rho1,m1)
w_minus = rawSVI(y*(1-dy),a1,b1,sigma1,rho1,m1)
w_dy = (w_plus - w_minus)/(2*y*dy)
w_dydy = (w_plus - 2*w + w_minus)/(y*dy)**2
w_tPlus = rawSVI(y,a2,b2,sigma2,rho2,m2)
w_dt = (w_tPlus - w)/(t2-t1)
localVol_2 = np.sqrt(w_dt/(1-y/w*w_dy+0.25*(-0.25-1/w+y**2/w**2)*w_dy**2+0.5*w_dydy))
print(localVol_2) #0.8991455521477574
```
|
local volatility formula
|
CC BY-SA 4.0
| null |
2023-05-23T09:53:48.853
|
2023-05-23T09:53:48.853
| null | null |
33650
|
[
"local-volatility"
] |
75624
|
1
|
75657
| null |
2
|
80
|
I struggle to understand why my market rates do not match my bootstrapped model, so I wonder why the spread between market and model is that high.
```
maturity | market | model
1W | 0.050640 | 0.050626
2W | 0.050670 | 0.050631
3W | 0.050720 | 0.050655
1M | 0.051021 | 0.050916
2M | 0.051391 | 0.051178
3M | 0.051745 | 0.051415
4M | 0.051940 | 0.051493
5M | 0.051980 | 0.051424
6M | 0.051820 | 0.051149
7M | 0.051584 | 0.050821
8M | 0.051310 | 0.050454
9M | 0.050924 | 0.049979
10M | 0.050604 | 0.049582
11M | 0.050121 | 0.049026
12M | 0.049550 | 0.048386
18M | 0.045585 | 0.044806
2Y | 0.042631 | 0.041774
3Y | 0.038952 | 0.038230
4Y | 0.036976 | 0.036321
5Y | 0.035919 | 0.035297
6Y | 0.035350 | 0.034745
7Y | 0.034998 | 0.034403
8Y | 0.034808 | 0.034219
9Y | 0.034738 | 0.034151
10Y | 0.034712 | 0.034125
12Y | 0.034801 | 0.034210
15Y | 0.034923 | 0.034327
20Y | 0.034662 | 0.034075
25Y | 0.033750 | 0.033193
30Y | 0.032826 | 0.032298
40Y | 0.030835 | 0.030369
50Y | 0.028960 | 0.028548
```
My curve only contains swaps.
Fixed leg :
- Discounting OIS
- Settlement T+2 Days
- Term 2 Week
- Day Count ACT/360
- Pay Freq Annual
- Bus Adj ModifiedFollowing
- Adjust Accrl and Pay Dates
- Roll Conv Backward (EOM)
- Calc Cal FD
- Pay Delay 2 Business Days
Float Leg
- Day Count ACT/360
- Pay Freq Annual
- Index SOFRRATE Index
- Reset Freq Daily
- Bus Adj ModifiedFollowing
```
self.swaps = {Period("1W"): 0.05064, Period("2W"): 0.05067, Period("3W"): 0.05072, Period("1M"): 0.051021000000000004, Period("2M"): 0.051391, Period("3M"): 0.051745, Period("4M"): 0.05194, Period("5M"): 0.051980000000000005, Period("6M"): 0.051820000000000005, Period("7M"): 0.051584000000000005, Period("8M"): 0.05131, Period("9M"): 0.050924, Period("10M"): 0.050603999999999996, Period("11M"): 0.050121, Period("12M"): 0.049550000000000004, Period("18M"): 0.04558500000000001, Period("2Y"): 0.042630999999999995, Period("3Y"): 0.038952, Period("4Y"): 0.036976, Period("5Y"): 0.035919, Period("6Y"): 0.03535, Period("7Y"): 0.034998, Period("8Y"): 0.034808, Period("9Y"):
0.034738000000000005, Period("10Y"): 0.034712, Period("12Y"): 0.034801, Period("15Y"): 0.034923, Period("20Y"): 0.034662, Period("25Y"): 0.03375, Period("30Y"): 0.032826, Period("40Y"): 0.030834999999999998, Period("50Y"): 0.02896}
```
Below is how I use `OISRateHelper`
```
for tenor, rate in self.swaps.items():
helper = ql.OISRateHelper(2, tenor, ql.QuoteHandle(ql.SimpleQuote(rate)), self.swap_underlying)
rate_helpers.append(helper)
```
And below is how I compare new rates to given rates :
```
self.curve = ql.PiecewiseSplineCubicDiscount(calculation_date, rate_helpers, self.swap_day_count_conv)
yts = ql.YieldTermStructureHandle(self.curve)
# Link index to discount curve
index = index.clone(yts)
# Create engine with yield term structure
engine = ql.DiscountingSwapEngine(yts)
# Check the swaps reprice
print("maturity | market | model")
for tenor, rate in self.swaps.items():
swap = ql.MakeVanillaSwap(tenor,
index, 0.01,
ql.Period('0D'),
fixedLegTenor=ql.Period('2D'),
fixedLegDayCount=self.swap_day_count_conv,
fixedLegCalendar=ql.UnitedStates(ql.UnitedStates.GovernmentBond),
floatingLegCalendar=ql.UnitedStates(ql.UnitedStates.GovernmentBond),
pricingEngine=engine)
print(f" {tenor} | {rate:.6f} | {swap.fairRate():.6f}")
```
Also note that :
```
self.swap_underlying = ql.OvernightIndex("USD Overnight Index", 2, ql.USDCurrency(), ql.UnitedStates(ql.UnitedStates.Settlement), ql.Actual360())
self.swap_day_count_conv = ql.Actual360()
```
Did I miss something? Is the implementation I made correct? Are there any discrepancies in the parameters?
Note
Curve description :
[](https://i.stack.imgur.com/TqCAp.png)
Swap Descripion :
[](https://i.stack.imgur.com/hiOgK.png)
[](https://i.stack.imgur.com/f4pz7.png)
|
BootStrap with quantlib USD SOFR (vs. FIXED RATE) swap curve
|
CC BY-SA 4.0
| null |
2023-05-23T12:23:19.223
|
2023-05-25T08:26:54.563
|
2023-05-23T15:13:54.900
|
42110
|
42110
|
[
"quantlib",
"swaps",
"yield-curve",
"bootstrapping",
"ois"
] |
75625
|
2
| null |
75621
|
5
| null |
Because ROIC is not part of the official accounting standards such as US-GAAP or IFRS, its definition may vary, depending a bit on how the party defining it plans to use it and on the availability of the data to that party.
The two approaches you have in your question would give the same result when the definitions are made and used correctly and consistently. From what I can see, some of the definitions and examples in the articles you have referred to are plain wrong or misleading; for example, there should be no need to refer to any of the cash flow statement items when calculating the invested capital.
A company's balance sheet can be categorized as follows when calculating the invested capital for ROIC:
```
Current Assets
Cash and Equivalents
Other Current Assets
Fixed Assets
Financial Investments
Other Fixed Assets
Total Assets
Current Liabilities
Financial and Leasing (FL) Debt
Other Current Liabilities
Long Term Liabilities
Financial and Leasing (FL) Debt
Other Long Term Liabilities
Shareholders' Equity
Total Liabilities and Equity
```
Given that
```
Total Assets = Total Liabilities and Equity
Current Assets + Fixed Assets = Current Liabilities + LT Liabilities
+ Shareholders' Equity
Shareholders' Equity = Current Assets + Fixed Assets
- (Current Liabilities + LT Liabilities),
```
we can show that the two approaches would yield the same result for the invested capital (IC) as follows:
```
IC = Shareholders' Equity
+ Total FL Debt
- Cash and Financial Investments
= Shareholders' Equity
+ (Current FL Debt + LT FL Debt)
- (Cash and Equivalents + Financial Investments)
= Current Assets + Fixed Assets
- (Current Liabilities + LT Liabilities)
+ Current FL Debt + LT FL Debt
- (Cash and Equivalents + Financial Investments)
= Current Assets + Fixed Assets
- (Current Liabilities + LT FL Debt + Other LT Liabilities)
+ Current FL Debt + LT FL Debt
- (Cash and Equivalents + Financial Investments)
= Current Assets - Cash and Equivalents
+ Fixed Assets - Financial Investments
- Current Liabilities + Current FL Debt
- Other LT Liabilities
= Other Current Assets
+ Other Fixed Assets
- Other Current Liabilities
- Other LT Liabilities
= Other Current Assets - Other Current Liabilities
+ Other Fixed Assets - Other LT Liabilities
= Net Working Capital + Other Fixed Assets
```
where
```
Net Working Capital = Other Current Assets - Other Current Liabilities
```
and assuming `Other LT Liabilities = 0`. If `Other LT Liabilities` is not zero, you have to use the formula taking it into account.
There might be variations in the definition of either approach, but either approach would yield the same result as the other as long as the terms are defined carefully and consistently in both approaches whenever a change is made.
If you wish to know more on ROIC and how to better measure and interpret it, one of the articles I would suggest reading would be Damodaran's "[Return on Capital (ROC), Return on Invested Capital (ROIC) and Return on Equity (ROE): Measurement and Implications](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1105499)" article from 2007.
| null |
CC BY-SA 4.0
| null |
2023-05-23T13:05:08.720
|
2023-05-25T11:20:49.910
|
2023-05-25T11:20:49.910
|
47484
|
47484
| null |
75628
|
2
| null |
51445
|
0
| null |
One can calculate the annual ROIC for a company based on the above data as I describe below; there is no need to take misleading shortcuts such as using net income instead of operating income or EBIT, or total liabilities instead of total debt, as the currently [accepted answer](https://quant.stackexchange.com/a/51447/47484) to your question does. Also note that the ROIC of a company at a given time depends on how the [invested capital](https://quant.stackexchange.com/a/75625/47484) as well as the return is exactly defined, and the results may vary significantly accordingly, as I also demonstrate below.
For the sake of simplicity, I assume the end-2019 numbers are used when calculating the invested capital, but they can easily be swapped with the annual averages. The figures related with the income statement are the quarterly totals.
```
ROIC = Operating Income * (1 - Tax Rate) / Invested Capital
= EBIT * (1 - Tax Rate) / (Equity + Debt - Cash)
= EBIT * (1 - Tax Expense / EBT) / (Equity + Debt - Cash)
= 'ebit' * (1 - 'incomeTaxExpense' / 'incomeBeforeTax')
     / ('totalStockholderEquity' + 'shortLongTermDebt' + 'longTermDebt' - 'cash' - 'shortTermInvestments' - 'longTermInvestments')
= 66153000000 * (1 - 10222000000 / 67749000000)
/ (89531000000 + 10224000000 + 93078000000 - 39771000000 - 67391000000 - 99899000000)
= -3.94797618832281
```
where EBT stands for earnings before taxes.
At first, it may look like there is something wrong with the calculation or the data because the result is negative (-395%), but there is not.
The number comes out negative because Apple had an immense amount of cash & securities (US\$207 billion) on its balance sheet at the end of 2019, even more than its equity and debt combined (US\$193 billion), which is uncommon among companies. This makes its invested capital at the end of 2019 negative, and hence its ROIC as well, despite its being a highly profitable company. It also implies Apple could afford to, and arguably should, pay out more dividends or buy back more stock, because it does not need so many liquid assets to operate.
On the other hand, one may prefer excluding the short and long term investments figures from the invested capital calculation for Apple. This is because the company has been carrying large amounts of liquid assets on its balance sheet for a number of years and the management seems intent on continuing to carry them, making them, in effect, part of Apple's standard operating procedure.
In that case, the formula would be revised as follows and the year-end 2019 ROIC for Apple would be 36.7% as shown below:
```
ROIC = 'ebit' * (1 - 'incomeTaxExpense' / 'incomeBeforeTax')
     / ('totalStockholderEquity' + 'shortLongTermDebt' + 'longTermDebt' - 'cash')
= 66153000000 * (1 - 10222000000 / 67749000000)
/ (89531000000 + 10224000000 + 93078000000 - 39771000000)
= 0.3669872679532277
```
This is perhaps more in line with what is reported by other sources, but I think it understates the actual return Apple generates on its capital and hides the inefficiencies in its capital structure.
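For completeness, here is a small Python sketch of the two calculations above; the variable names mirror the data fields quoted in the question and the figures are the end-2019 ones used above:
```
# End-2019 figures used above (USD)
ebit = 66_153_000_000
income_tax_expense = 10_222_000_000
income_before_tax = 67_749_000_000
equity = 89_531_000_000
short_long_term_debt = 10_224_000_000
long_term_debt = 93_078_000_000
cash = 39_771_000_000
short_term_investments = 67_391_000_000
long_term_investments = 99_899_000_000

nopat = ebit * (1 - income_tax_expense / income_before_tax)

# Variant 1: subtract all cash and securities from invested capital
ic_all_liquid = (equity + short_long_term_debt + long_term_debt
                 - cash - short_term_investments - long_term_investments)
print(nopat / ic_all_liquid)    # about -3.95, i.e. -395%

# Variant 2: subtract only cash
ic_cash_only = equity + short_long_term_debt + long_term_debt - cash
print(nopat / ic_cash_only)     # about 0.367, i.e. 36.7%
```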
| null |
CC BY-SA 4.0
| null |
2023-05-23T16:53:06.533
|
2023-05-28T13:34:03.563
|
2023-05-28T13:34:03.563
|
47484
|
47484
| null |
75629
|
1
| null | null |
0
|
25
|
Background: I am currently implementing a correlated Monte Carlo simulation model using Cholesky decomposition to create the sampling distribution.
Question: What is the difference between creating and sampling an aggregate distribution of correlated assets and simply creating a univariate simulation based on the portfolio level expected return and volatility, as is seen in MVO?
Sorry if the question has been answered already; I'm relatively new to this Stack Exchange and couldn't find anything specifically addressing it.
|
Aggregate Portfolio Simulation vs. Underlying Assets
|
CC BY-SA 4.0
| null |
2023-05-23T17:27:36.073
|
2023-05-23T17:27:36.073
| null | null |
67527
|
[
"stochastic-processes",
"multivariate"
] |
75631
|
1
| null | null |
0
|
50
|
I have experimented a bit with the SABR model. In particular I have calibrated it on EURUSD. Obviously, as SABR is a 3 parameter model, it will not fit the 5 market quotes consisting of ATM, 25 delta butterfly, 10 delta butterfly, 25 delta risk reversal and 10 delta risk reversal perfectly. If I ignore the 10 deltas in the calibration, then the 10 delta butterfly I get in the fitted model tends to be too large. How do people in practice deal with this? Are there modifications of SABR, that would not be too hard to implement, that allow better fits for 5 point smiles? I assume this would only be possible by adding parameters to the model. I have heard that banks tend to use modifications of SABR for pricing Vanilla options, so I would be curious what can be done here.
|
SABR: how to deal with the wings?
|
CC BY-SA 4.0
| null |
2023-05-23T19:27:26.127
|
2023-05-23T19:27:26.127
| null | null |
42914
|
[
"options",
"volatility",
"sabr"
] |
75632
|
1
| null | null |
1
|
42
|
We have a regular self-financing portfolio $W_t$:
$$dW_t = n_t dS_t + (W_t − n_t S_t) r dt$$
Where $W_t$ is total wealth, $n_t$ is amount of stock, $S_t$ is stock price, $r$ is the risk-free rate.
The strategy is to keep the stock exposure $\varphi$ constant:
$\varphi = n_t S_t/W_t$
Currently I'm having trouble trying to derive the dynamics of $n_t$ in terms of $dS_t$ and $dt$.
|
How to express the process of number of stock (nt) in a portfolio using ito's lemma
|
CC BY-SA 4.0
| null |
2023-05-23T23:04:34.250
|
2023-05-24T10:33:32.037
|
2023-05-24T10:33:32.037
|
16148
|
67212
|
[
"time-series",
"portfolio-optimization",
"portfolio",
"itos-lemma"
] |
75633
|
1
| null | null |
6
|
436
|
I have recently struggled in interviews, for two quantitative trading positions, by producing weak answers to effectively the same (fairly basic) question. I would like to understand, from a quant perspective, what I am missing about multicollinearity.
The question assumes you have a large portfolio of assets (say n=1000 stocks). As I recall, you prepare a covariance matrix (presumably of the price returns). The implication is that many of these returns are correlated. The question basically is, 'what is the problem with this, and how do you solve it?'.
Let $X\in \mathbb{R}^{m\times n}$ represent the matrix formed by concatenating vectors of each stock's returns observed over $m$ timesteps.
- My answer to 'What is the problem?':
If the returns are correlated, then there is some 'redundancy' in the matrix $X$ (in the extreme case, where one series of returns is identical to another, the matrix is rank-deficient).
I think that the implication is X is our matrix of features, and we are dealing with a linear regression model. Hence we are worried about the impact of inverting matrix $X^\top X$. If we have perfect multicollinearity, then this cannot be inverted; if we just have some multicollinearity, we will fit poorly, giving large errors/instability in our estimates for $\beta$.
- My answer to 'How do you solve it?':
Regularisation; the model has 'too many' features, and we should prioritise the more informative ones. L1 regularisation, in particular, allows for us to penalise solutions with many features and simplify the model (whilst retaining interpretability), so we could use a LASSO regression instead. L2 regularisation could also be used, but this doesn't, in general, reduce the number of features.
Unfortunately, I don't think these answers are textbook, so I would love some clarifications:
- Is this even a question about model fitting? Or is it really about variance-covariance matrices, portfolio risk, and/or CAPM-style financial management?
- An interviewer suggested using PCA instead of regularisation. I am not sure why that would be superior, since the principal components do not map to the original stocks you had in your portfolio.
- Does this apply to other models, which don't involve inverting $X^\top X$, or just linear regression?
|
What is the textbook answer to dealing with multicollinearity?
|
CC BY-SA 4.0
| null |
2023-05-24T00:10:26.223
|
2023-05-24T12:08:09.793
|
2023-05-24T12:08:09.793
|
26556
|
29845
|
[
"regression",
"factor-models",
"pca"
] |
75634
|
1
| null | null |
0
|
28
|
In the context of pair trading, I'm trying to regress a VEC model on cointegrated pairs (and also a GARCH model on the residuals of that VEC model). I would like to generate random realisations of each pair, each realisation having the same dynamics as its "real life" counterpart. What I did initially is that I first generate random innovations using the GARCH parameters regressed on the pair, then I use the VEC equation with the regressed parameters of the pair to generate the final random simulation of the prices of the 2 assets.
However, the problem is that doing it this way leads to negative prices, and a conditional variance that is on the same scale when the price is close to 0 as when the price is far from 0. In the real world, prices are always positive and the conditional variance decreases logarithmically when the price drops close to 0.
I've tried to perform the same entire process, but instead of regressing the VEC and the GARCH on prices, I regress on log prices. Then, I generate random simulations the same way as before, but I get a random simulation of log prices, in log-dollars. Only at the end of the whole process do I exponentiate the random log-prices to get my real prices.
The problem of negative generated prices is fixed, and the variance of the generated prices looks more like a real conditional variance when the price is close to 0.
Some pairs give nice results, looking much more like the shape of the real pair than before the log transform. The scale of the final random prices is aligned with the scale of the prices of the assets of the pair on which the VEC and GARCH models are regressed.
However, some other pairs show very weird results, with random prices stuck close to 0 or completely out of scale compared to the real prices of their "real" counterpart.
Is the procedure that I described something that can be mathematically OK? Is it possible to apply the VEC and GARCH models to log prices and log residuals like that?
|
VEC model on log prices, for random simulation?
|
CC BY-SA 4.0
| null |
2023-05-24T02:40:33.300
|
2023-05-24T02:40:33.300
| null | null |
63143
|
[
"garch",
"simulations"
] |
75635
|
2
| null |
75633
|
8
| null |
As one of the interviewers suggested, the expected answer starts with PCA and SVD.
Before detailing it, let's take a paragraph about the way you seem to "misunderstand" the problem: suggesting LASSO or Ridge is out of scope.
Indeed, these techniques are based on penalising a loss function, and in this question: where is the loss function you plan to penalise?
If I were the interviewer, such an answer would worry me more than a candidate not proposing PCA.
Nevertheless, you are right that this collinearity makes the inversion of $X^T X$ (an $N\times N$ matrix) impossible because it is not full rank.
Not being full rank means that you have to operate in the orthogonal complement of its kernel, and the way to identify the kernel is to diagonalise the matrix.
What does it mean? You get the diagonal version $X^T X=P\Delta P^{-1}$, and keep in mind that $P^{-1}=P^T$. Because the returns of $K$ stocks are collinear, you should have $K-1$ zeros among the eigenvalues of $\Delta$, which are its diagonal values.
Look at the operation of multiplying $X^T X$ by a vector $v$ (whatever it is):
$$X^T X\cdot v=P\Delta P^T \cdot v = P\cdot\big(\Delta(P^T v)\big).$$
To "work in the orthogonal of the kernel" means that once the vector is "rotated" by $P^T$, the last $K-1$ components of $P^T v$ are multiplied by zeros. They correspond to the last $K-1$ columns of $P$.
This means that you can safely remove these coordinates: You can invert your matrix, if needed, in the space spanned by the $N-K+1$ first components of the PCA.
Numerically, since $X$ is in general rectangular with far more rows than columns, it is good to use a [Singular Value Decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition) (SVD). It prevents you from inverting an $N$ by $N$ matrix and deals directly with the rectangular matrix.
In practice, it is not that easy, because you find a lot of very small eigenvalues: deciding whether they are zeros or not is a complicated question.
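For what it's worth, here is a minimal numpy sketch of this idea; the tolerance used to decide which singular values count as zero is an arbitrary choice, not a universal rule:
```
import numpy as np

rng = np.random.default_rng(42)
m, n = 500, 10
X = rng.normal(size=(m, n))
X[:, -1] = X[:, 0]              # make two columns perfectly collinear

# SVD of the (rectangular) return matrix instead of eigen-decomposing X'X
U, s, Vt = np.linalg.svd(X, full_matrices=False)
tol = max(X.shape) * np.finfo(float).eps * s[0]
rank = np.sum(s > tol)
print(f"numerical rank: {rank} out of {n}")   # 9: one direction is redundant

# pseudo-inverse of X'X restricted to the non-degenerate directions
s_inv = np.where(s > tol, 1.0 / s**2, 0.0)
XtX_pinv = (Vt.T * s_inv) @ Vt
```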
My advice is to get the Python code on [scikit-learn](https://scikit-learn.org/stable/auto_examples/applications/plot_stock_market.html), to keep only the first part and to experiment with it (last time I checked, it still managed to download stock returns from Yahoo Finance).
There are different approaches to deal with that: the first is to do some econometrics to identify the stocks that are collinear and to replace them with an "equivalent portfolio" (or just keep one of them), which is equivalent to positioning your problem in the orthogonal complement of the collinear returns.
The second is to rely on [Random Matrix Theory](https://en.wikipedia.org/wiki/Random_matrix) that will tell you how to "[shrink](https://www.sciencedirect.com/science/article/pii/S0047259X21000749)" the eigenvalues of the $X^T X$ Matrix.
A last remark about your LASSO proposal: it is indeed far from stupid from a portfolio construction perspective. You can have a look at Bruder, Benjamin, Nicolas Gaussel, Jean-Charles Richard, and Thierry Roncalli, "[Regularization of portfolio allocation](http://www.thierry-roncalli.com/download/Portfolio_Regularization.pdf)," available at SSRN 2767358 (2013). It very clearly explains how most portfolio construction penalisations make sense.
Nevertheless, it is not the answer that is expected first, because it opens the door to sophisticated questions about portfolio construction. Especially in an interview, but also in practice, you should start by setting a baseline model, before trying something more complicated.
| null |
CC BY-SA 4.0
| null |
2023-05-24T03:02:09.943
|
2023-05-24T12:05:48.933
|
2023-05-24T12:05:48.933
|
26556
|
2299
| null |
75636
|
2
| null |
51209
|
0
| null |
I didn't read the above in detail, but I don't think I actually saw the correct answer. If you have an annual rate but monthly cashflows, then your one-month discount factor is 1 divided by (1+annual rate)^(1/12), or put another way (1+annual rate)^-(1/12).
You've been dividing the rate itself by 12 in many versions above, and that's part of the problem. Here is a simple way to check that what I told you makes sense.
If you put \$1 in the bank for a year at a 10% rate, at the end of the year you have \$1.10, assuming the bank waits until the end to pay you.
Now assume the bank compounds monthly. At the end of each month your balance grows by a factor of (1+annual rate)^(1/12); do that 12 times and the exponents sum to 1. Or you can do it the long way and see what you get each month; once again, at year end you have \$1.10.
So the monthly accretion and annual accretion now match.
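A quick Python check of this (10% annual rate, purely illustrative):
```
annual_rate = 0.10
monthly_factor = (1 + annual_rate) ** (1 / 12)   # per-month growth factor

balance = 1.0
for _ in range(12):
    balance *= monthly_factor

print(balance)                   # ~1.10, matches the annual accretion
print(1 / monthly_factor)        # the one-month discount factor, about 0.99209
```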
| null |
CC BY-SA 4.0
| null |
2023-05-24T03:46:18.727
|
2023-05-24T03:46:18.727
| null | null |
67532
| null |
75638
|
2
| null |
75604
|
2
| null |
When trading derivatives via "voice" sometimes you got asked if the position is outright. The intention behind this is to understand if your trade idea and your request is part of a larger construction.
Simply speaking: if you ask me for a bid/ask for X, I might ask you if this trade is "outright".
Your reaction could be "no, I'd like to trade X vs. Y as a pair trade".
Then I would price you both instruments at the same time.
A different case could be if you had a fixed income portfolio (consisting of swaps and bonds) and the risk manager asks you "why do you have swap X open?". Your answer could be "This is an outright trade", so the RM knows that this position is not for hedging purposes.
| null |
CC BY-SA 4.0
| null |
2023-05-24T10:30:31.210
|
2023-05-24T10:30:31.210
| null | null |
26899
| null |
75640
|
2
| null |
28156
|
0
| null |
I think you can't just assume the yield of the CTD for a future. You also need to subtract the repo cost, since a bond future = forward = cash bond + repo. So I would say CTD forward yield minus repo cost. No?
| null |
CC BY-SA 4.0
| null |
2023-05-24T12:26:12.627
|
2023-05-24T12:26:12.627
| null | null |
67540
| null |
75641
|
1
| null | null |
5
|
79
|
I am currently reading "[Modelling single-name and multi-name credit derivatives"](https://rads.stackoverflow.com/amzn/click/com/0470519282) by [Dom O'Kane](https://quant.stackexchange.com/users/12240/dom) but I struggle at one point that should be relatively easy.
Let us consider a Zero Recovery Risky Zero Coupon Bond, that is to say a bond that pays 1 in case there is no default, $\tau > T$ (that is, the time of default arrives after the maturity time).
The pricing formula for such a product is thus given by:
$Z(0, T) = E\left[\exp\left(-\int_0^T r(t)dt\right) \cdot \mathbb{1}( \tau > T)\right]$.
We also know that:
$P(\tau > T) = \exp\left(-\int_0^T \lambda(t)dt\right)$.
In order to simplify the writing of $Z(0, T)$, here are the steps that are presented:
$Z(0, T) = E\left[\exp\left(-\int_0^T r(t)dt\right) \cdot \mathbb{1}( \tau > T)\right]$.
Using the law of iterated expectation, we have:
$Z(0, T) = E\left[E\left[\exp\left(-\int_0^T r(t) dt\right) \cdot \mathbb{1}(\tau > T) \,\middle|\, \{\lambda(t)\}_{t \in [0,T]}\right]\right]$.
And as
$E\left[\mathbb{1}(\tau > T) \,\middle|\, \{\lambda(t)\}_{t \in [0,T]}\right] = \text{P}(\tau > T) = \exp\left(-\int_0^T \lambda(t)dt\right)$.
So one has: $Z(0, T) = \mathbb{E}\left[\exp\left(-\int_0^T (r(t) + \lambda(t)) dt\right)\right]$.
Question 1: Why is the filtration selected the set of $\lambda(t)$?
Question 2: How can we split and write this:
$E\left[\exp\left(-\int_0^T r(t) \, dt\right) \cdot \mathbf{1}_{\tau > T} \,|\, \tau\right] = \exp\left(-\int_0^T r(t) \, dt\right) \cdot E\left[\mathbf{1}_{\tau > T} \,|\, \tau\right]$ if we did not make any assumption on the independence of $r$ and $\lambda$?
|
Law of iterated expectation for the pricing of a Zero Recovery Risky Zero Coupon Bond
|
CC BY-SA 4.0
| null |
2023-05-24T13:02:25.017
|
2023-05-24T16:09:54.723
|
2023-05-24T13:44:14.777
|
36636
|
66451
|
[
"finance-mathematics"
] |
75642
|
1
| null | null |
3
|
191
|
Let us consider a product paying an amount of 1 at the time of default if default occurs ($\tau \leq T$, that is, the time of default arrives before the maturity time), and 0 otherwise.
The price of such a product would be given by:
$$D(0, T) = E\left(\exp\left(-\int_0^\tau r(t)dt\right) \cdot \mathbb{1}_{\tau \leq T}\right)$$
Knowing that:
$$\text{Probability}(T \leq \tau \leq T + dT) = \lambda(T) \cdot \exp\left(-\int_0^{T} \lambda(t) dt\right) \cdot dT$$
Could someone explain how it is possible to arrive to:
$$D(0,T) = E\left(\int_0^T \lambda(t) \cdot \exp\left(-\int_0^t (r(s) + \lambda(s)) ds\right) dt\right)$$
I tried to use the law of iterated expectation but struggled to find a way out.
|
Fixed Payment at Default - Pricing
|
CC BY-SA 4.0
| null |
2023-05-24T14:04:04.420
|
2023-05-26T13:39:09.757
|
2023-05-24T14:08:10.537
|
26556
|
66451
|
[
"finance-mathematics"
] |
75644
|
2
| null |
75641
|
4
| null |
In the context of credit risk and stochastic calculus, the filtration of a stochastic process, denoted by $\{\mathcal{F}_t\}$, represents the accumulated information up to time $t$. This concept allows for the consideration of new information as it is revealed over time. Now, let's get into the specifics of your questions.
Question 1:
The filtration chosen is the set of $\{\lambda(t)\}$ because $\lambda(t)$ represents the intensity of the default process $\tau$ and therefore contains all information about the default risk up to time $t$. In the case of modelling credit derivatives, knowing the default intensity is crucial for pricing. The filtration $\{\lambda(t)\}$ contains all the available information about this default intensity. This means it contains all the information we have up until time $t$ to determine the probability that default occurs after time $T$.
Question 2:
Regarding the expectation, there seems to be a misunderstanding here. What you're describing isn't a separation of the integrals of $r(t)$ and $\lambda(t)$, but rather the application of the law of total expectation (or iterated expectations).
The law of total expectation states that the expected value of a random variable can be calculated by taking the expected value of its conditional expectation with respect to a smaller sigma-algebra. This allows us to separate the probability of the default event ($\tau>T$) from the discounting factor. It is not about the independence between $r$ and $\lambda$, but the application of the law of total expectation.
In the equation
$$E\left[\exp\left(-\int_0^T r(t)\,dt\right) \cdot \mathbb{1}(\tau > T) \,\middle|\, \tau\right] = \exp\left(-\int_0^T r(t)\,dt\right) \cdot E\left[\mathbb{1}(\tau > T) \,\middle|\, \tau\right],$$
the independence between $r$ and $\tau$ is assumed. Therefore, we can treat $r$ as deterministic when considering the expectation given $\tau$. This allows us to move $\exp\left(-\int_0^T r(t)\,dt\right)$ out of the expectation operator, as it doesn't involve $\tau$.
Do note that this sort of simplification can often be found in credit risk modelling as we usually consider the short rate r and the default intensity λ as independent, which simplifies the calculations while still capturing the core features of the credit derivative's behavior.
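To make the tower-property step fully explicit, here is the chain of equalities written out, conditioning on the joint paths of $r$ and $\lambda$ under the usual doubly-stochastic assumption that, given those paths, the survival probability is $\exp\left(-\int_0^T \lambda(t)dt\right)$:
$$
\begin{aligned}
Z(0,T) &= E\left[\exp\left(-\int_0^T r(t)\,dt\right)\mathbb{1}(\tau>T)\right] \\
&= E\left[\,E\left[\exp\left(-\int_0^T r(t)\,dt\right)\mathbb{1}(\tau>T)\ \middle|\ \{r(t),\lambda(t)\}_{t\in[0,T]}\right]\right] \\
&= E\left[\exp\left(-\int_0^T r(t)\,dt\right)\,E\left[\mathbb{1}(\tau>T)\ \middle|\ \{r(t),\lambda(t)\}_{t\in[0,T]}\right]\right] \\
&= E\left[\exp\left(-\int_0^T r(t)\,dt\right)\exp\left(-\int_0^T \lambda(t)\,dt\right)\right]
= E\left[\exp\left(-\int_0^T \big(r(t)+\lambda(t)\big)\,dt\right)\right].
\end{aligned}
$$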
| null |
CC BY-SA 4.0
| null |
2023-05-24T16:09:54.723
|
2023-05-24T16:09:54.723
| null | null |
42110
| null |
75645
|
2
| null |
75642
|
3
| null |
The quantity you're trying to derive is the price at time 0 of a claim that pays 1 at the default time if default occurs before the maturity time T (i.e., $\tau \leq T$), and 0 otherwise.
This is a bit tricky but the key here is to understand that we can represent this price as an integral over the possible default times. The intuition is that you're integrating the payoffs across all possible times of default.
Since default could happen at any time t in [0, T], we write D(0,T) as an integral from 0 to T.
At any given time $t$, the contribution is the value of 1 discounted back to time 0, weighted by the probability density that default occurs at that time. The discounting term is $\exp\left(-\int_{0}^{t} r(s)ds\right)$ and, from your given information, the density of the default time is $\lambda(t) \cdot \exp\left(-\int_{0}^{t} \lambda(s)ds\right)$.
Therefore, conditioning on the paths of $r$ and $\lambda$ (which determine the density of $\tau$) and using Fubini's theorem to exchange the expectation and the time integral, we can write D(0,T) as:
$$
D(0,T) = E\left[ \int_{0}^{T} \exp\left(-\int_{0}^{t}r(s)ds\right) \cdot \lambda(t) \cdot \exp\left(-\int_{0}^{t}\lambda(s)ds\right) dt \right]
$$
Note that the outer expectation is taken at time 0; given the paths of $r(s)$ and $\lambda(s)$ for $s \leq t$, the integrand is fully determined, and the only randomness that has been integrated out at this stage is the default time $\tau$ itself.
Collecting the two exponentials inside the integral then gives exactly the expression you were after:
$$
D(0,T) = E\left[ \int_{0}^{T} \lambda(t) \cdot \exp\left(-\int_{0}^{t} (r(s) + \lambda(s)) ds\right) dt \right]
$$
If $r$ and $\lambda$ are deterministic (or if one works conditionally on their paths), the expectation can simply be dropped and the price reduces to:
$$
D(0,T) = \int_{0}^{T} \lambda(t) \cdot \exp\left(-\int_{0}^{t} (r(s) + \lambda(s)) ds\right) dt
$$
In this last expression, the $\lambda(t) \cdot \exp\left(-\int_{0}^{t} \lambda(s)ds\right)$ term is the density of the default time at $t$ and $\exp\left(-\int_{0}^{t} r(s)ds\right)$ is the discount factor. We integrate this product over $[0, T]$ to get the expected discounted payoff of the claim.
It's important to remember that the law of iterated expectations and the assumptions of independence between certain variables are key to simplifying these kinds of expressions in the credit risk context.
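As a sanity check on the final formula, here is a small Monte Carlo sketch with constant $r$ and $\lambda$ (chosen arbitrarily); in that case the integral has the closed form $\frac{\lambda}{r+\lambda}\left(1-e^{-(r+\lambda)T}\right)$, which the simulation should reproduce:
```
import numpy as np

r, lam, T = 0.03, 0.02, 5.0
rng = np.random.default_rng(0)

# simulate default times tau ~ Exponential(lam) and discount the unit payoff at default
tau = rng.exponential(1.0 / lam, size=1_000_000)
mc_price = np.mean(np.exp(-r * tau) * (tau <= T))

closed_form = lam / (r + lam) * (1.0 - np.exp(-(r + lam) * T))
print(mc_price, closed_form)       # both around 0.0885
```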
| null |
CC BY-SA 4.0
| null |
2023-05-24T16:13:26.717
|
2023-05-26T13:39:09.757
|
2023-05-26T13:39:09.757
|
66451
|
42110
| null |
75646
|
1
|
75647
| null |
3
|
79
|
I've been reading about multi-armed bandits and the explore/exploit trade-off that can be solved with dynamic allocation indices such as the Gittins Index Theorem. Could this be applied to when to change investment/trading strategies assuming no cost in switching such strategies? If the fund's risk aversion is an issue, would this be applicable to a risk-neutral actor (say a hedge fund)?
|
Is the Gittins index useful in determining when to change an investment/trading strategy?
|
CC BY-SA 4.0
| null |
2023-05-24T16:43:37.327
|
2023-05-24T18:23:23.427
| null | null |
55044
|
[
"quant-trading-strategies",
"algorithmic-trading",
"mathematics",
"strategy"
] |
75647
|
2
| null |
75646
|
3
| null |
[Gittins](https://en.wikipedia.org/wiki/Gittins_index) is as useful as your ability to forecast returns and uncertainty.
Depending on what you use for 'uncertainty,' you may just be replicating processes that other financial metrics already accomplish.
That said, there is nothing wrong with using Gittins to attempt dynamic, tactical asset allocation. It's a matter of how well your forecasting works. You need to forecast returns, and whatever you use for uncertainty, such as expected volatility, expected drawdowns, some other custom function that quantifies the uncertainty input, etc.
If you have several trading strategies that produce consistent risks and have consistent returns that you are comfortable using to forecast, Gittins could be used to help with the timing of a switch between strategies.
| null |
CC BY-SA 4.0
| null |
2023-05-24T18:23:23.427
|
2023-05-24T18:23:23.427
| null | null |
26556
| null |
75648
|
2
| null |
2679
|
0
| null |
Look at the assets as copies of each other (say commonality $f$) with a bit of noise added to each asset. When you average out the assets you retain $f$ and the noise averages out to 0. $f$ is the "common part" embedded in each asset - this is what you need to know to pin down at once all the asset returns (to a reasonable extent), and thus this is what PCA abstracts out.
This lends itself to the natural interpretation of "market portfolio" - i.e. that which is embedded roughly in equal measure in all assets.
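A tiny numpy illustration of this intuition (the factor and noise volatilities below are arbitrary, synthetic choices):
```
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 100
f = rng.normal(0.0, 0.01, T)                      # common ("market") factor returns
X = f[:, None] + rng.normal(0.0, 0.02, (T, N))    # each asset = factor + own noise

# averaging across assets washes out the noise and recovers the factor
avg = X.mean(axis=1)
print(np.corrcoef(avg, f)[0, 1])                  # close to 1

# the first principal component has roughly equal weights on every asset
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
w = eigvecs[:, -1]                                # eigenvector of the largest eigenvalue
print(w.std() / abs(w.mean()))                    # small: loadings are nearly uniform
```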
| null |
CC BY-SA 4.0
| null |
2023-05-24T20:23:15.963
|
2023-05-24T20:23:15.963
| null | null |
57141
| null |
75649
|
1
| null | null |
0
|
54
|
I am trying to implement MonotoneConvex by Hagan/West by extending FittingMethod class. As per this page:
[https://rkapl123.github.io/QLAnnotatedSource/d7/d0d/class_quant_lib_1_1_fitted_bond_discount_curve_1_1_fitting_method.html#details](https://rkapl123.github.io/QLAnnotatedSource/d7/d0d/class_quant_lib_1_1_fitted_bond_discount_curve_1_1_fitting_method.html#details)
I need to override discountFunction() and size(). Is this possible through QuantLib-Python?
|
QuantLib: How to implement custom FittingMethod in Python?
|
CC BY-SA 4.0
| null |
2023-05-24T20:24:49.500
|
2023-05-25T08:17:38.747
| null | null |
67475
|
[
"quantlib"
] |
75650
|
1
| null | null |
1
|
74
|
I have been using the CrossCurrencyBasisSwapRateHelper feature to generate a collateralised discounting curve where the collateral is in a currency different to that of the asset. However, I noticed that this feature has now been deprecated and is no longer available, although it is still listed under the 'Helpers' section of the online QuantLib help documentation. Is there an alternative?
|
CrossCurrencyBasisSwapRateHelper feature deprecated
|
CC BY-SA 4.0
| null |
2023-05-24T22:21:56.370
|
2023-05-25T07:53:54.640
| null | null |
58684
|
[
"programming",
"quantlib"
] |
75651
|
2
| null |
75650
|
2
| null |
It looks as if ConstNotionalCrossCurrencyBasisSwapRateHelper has been added as a replacement. There is just no documentation offered for it yet at
[https://quantlib-python-docs.readthedocs.io/en/latest/thelpers.html?highlight=helpers](https://quantlib-python-docs.readthedocs.io/en/latest/thelpers.html?highlight=helpers)
| null |
CC BY-SA 4.0
| null |
2023-05-24T23:00:07.200
|
2023-05-24T23:00:07.200
| null | null |
58684
| null |
75652
|
1
| null | null |
3
|
50
|
I am reading Euan's book, ‘positional option trading’ and have a question about risk reversal P/L example. Here is description 'Consider a 1-month risk reversal on a \$100 stock. The 20-delta put (91 strike) has an implied volatility of 40.8% and the 20-delta call has an implied volatility of 23.1%. We sell the put and buy the call because we expect the skew to flatten. Table 8.11 shows the profits we make on the position for various degrees of flattening. However, the expected daily move of a \$100 stock with a volatility of 30% is \$1.50. If the stock drops to \$98.5, the risk reversal loses \$94, and if the stock rallies to \$101.5,the risk reversal will make $16. So, an average daily P/L due to the stock's random fluctuations is \$55. '
I wonder how the author came up with these numbers (i.e., \$94 loss, \$16 gain, daily P/L due to stock move \$55). I can construct the risk reversal here by selling the 20-delta put (put strike at 91, with volatility = IV of 40.8%) and buying the 20-delta call (call strike at 103, with volatility = IV of 23.1%). I know the P/L (-94, +16, +55) here is due to the delta/gamma effect; I believe the delta is 0.4 (0.2+0.2, correct me if I am wrong). I have tried all kinds of ways (including the ATM volatility of 0.3 and the implied volatilities, etc.) to compute the P/L due to the stock move, but I couldn't get these numbers. Can anyone help me out? Thanks very much in advance!
Fred
|
question on risk reversal P/L example in Euan Sinclair's book 'positional option trading'
|
CC BY-SA 4.0
| null |
2023-05-25T03:17:32.030
|
2023-05-25T03:25:29.063
|
2023-05-25T03:25:29.063
|
67547
|
67547
|
[
"implied-volatility",
"option-strategies",
"skewness"
] |
75653
|
2
| null |
75649
|
3
| null |
I think no is the short answer. You can't edit Quantlib libraries in Quantlib Python - you can only use them.
Here's my understanding: Quantlib is a library written in C++ with all source files freely available. Once compiled the library can be "accessed" from a scripting language like Python, R ... etc via specially written interface files through a program called SWIG. Not all Quantlib C++ classes are accessible via SWIG - only ones for which an interface file has been created. So using Quantlib Python is just a front end that lets you utilize Quantlib C++. To do what you're suggesting would involve:
- Download the Quantlib C++ source package and compile/install on a local workstation
- Download and install Quantlib SWIG which creates a Python module that interfaces with that particular C++ installation
- Once you have everything working, you can change the source for any C++ file to your own customization, recompile the code and reinstall.
- To get these changes working with Python: depending on what you've changed/added in C++ you may need to either edit an existing/create a new interface file, wrap them and reinstall Quantlib-SWIG.
I don't know the details of your particular issue: but I think you would need to create a new class called MonotoneConvex and tell the FittingMethod interface file about this new class. Alternatively, you can raise a community GitHub request to add this new method to the master library (or, since Quantlib is community driven project and you know what you're doing, contribute it yourself!)
Btw if, like myself, you sometimes just want to get things to work quickly so you can play around with ideas and you're not a C++ whiz-kid: you can do steps 1,2 and 3 above and then just hack the C++ code of one of the existing method classes to the math you want executed. Then recompile the C++ library (this way you don't even have to change the SWIG installation). And things will run in your local Python module. This may work depending on what dependencies your hack breaks...etc (you'll find this out when you try and recompile the altered C++ code).
| null |
CC BY-SA 4.0
| null |
2023-05-25T06:11:58.903
|
2023-05-25T06:24:59.470
|
2023-05-25T06:24:59.470
|
35980
|
35980
| null |
75654
|
2
| null |
75650
|
1
| null |
It seems that the `ConstNotionalCrossCurrencyBasisSwapRateHelper` is indeed a replacement for the deprecated `CrossCurrencyBasisSwapRateHelper` in QuantLib.
The `ConstNotionalCrossCurrencyBasisSwapRateHelper` class is used to create rate helpers for cross-currency basis swap curves, where the collateral is in a currency different from that of the asset. It allows for a constant notional amount for the basis swaps, as opposed to varying notionals that were supported by the deprecated helper.
To use `ConstNotionalCrossCurrencyBasisSwapRateHelper`, you would typically create an instance of the helper and pass the required parameters such as the settlement days, quote, start and end dates, and the underlying basis swap index. This helper can then be used in the construction of your cross-currency basis swap curve.
Here's a basic example demonstrating the idea in Python with QuantLib; note that the exact constructor arguments of `ConstNotionalCrossCurrencyBasisSwapRateHelper` have changed across QuantLib versions, so check `help(ql.ConstNotionalCrossCurrencyBasisSwapRateHelper)` in your installed bindings before relying on this snippet:
```
import QuantLib as ql
# Set up the required parameters
settlement_days = 2
quote = ql.SimpleQuote(0.01) # Example basis swap rate quote
start_date = ql.Date(25, 5, 2023) # Example start date
end_date = ql.Date(25, 5, 2024) # Example end date
calendar = ql.TARGET() # Example calendar
collateral_currency = ql.EURCurrency() # Example collateral currency
asset_currency = ql.USDCurrency() # Example asset currency
basis_swap_index = ql.Euribor6M() # Example basis swap index
# Create the basis swap rate helper
basis_swap_helper = ql.ConstNotionalCrossCurrencyBasisSwapRateHelper(
settlement_days, quote, start_date, end_date,
calendar, collateral_currency, basis_swap_index, asset_currency
)
# Retrieve the discount curve for collateral currency
collateral_curve = ql.YieldTermStructureHandle(
ql.FlatForward(0, ql.TARGET(), 0.05, ql.Actual360())
) # Example collateral curve
# Construct the discounting curve with the basis swap helper
helpers = [basis_swap_helper]
curve = ql.PiecewiseLogCubicDiscount(settlement_days, calendar, helpers, ql.Actual360())
# Evaluate the curve at a specific date
curve_date = ql.Date(1, 6, 2023) # Example evaluation date
discount_factor = curve.discount(curve_date)
print(f"Discount factor at {curve_date}: {discount_factor:.6f}")
```
| null |
CC BY-SA 4.0
| null |
2023-05-25T07:53:54.640
|
2023-05-25T07:53:54.640
| null | null |
42110
| null |
75656
|
2
| null |
75614
|
2
| null |
When we say that holding an uncollateralized derivative is equivalent to taking a loan from the counterparty, we are referring to the economic exposure and funding implications associated with the derivative contract. The key point is that the derivative contract represents an agreement between two parties to exchange future cash flows based on the underlying asset or reference rate.
In an uncollateralized derivative, there is a credit risk associated with the counterparty's ability to meet its obligations. The party holding the derivative is exposed to potential losses if the counterparty defaults. To mitigate this risk, the holder of the derivative needs to consider the potential funding cost associated with the exposure.
To put it in perspective, consider the following scenario:
- You enter into an uncollateralized derivative contract with Counterparty A.
- Over the life of the contract, the derivative generates positive cash flows for you, which you expect to receive from Counterparty A at various future dates.
- However, as a prudent risk management practice, you need to consider the possibility that Counterparty A may default or fail to meet its obligations. This introduces a credit risk.
- To protect yourself against the credit risk, you would need to set aside or reserve some funds or capital to cover potential losses in case Counterparty A defaults.
- This setting aside of funds or capital can be seen as an implicit loan made to Counterparty A. You are effectively lending them the amount required to cover the potential losses.
- The funding cost arises because you need to obtain the funds to cover this potential exposure. You may need to borrow or raise capital to finance this loan-like exposure.
So, the concept of funding arises from the need to protect against the credit risk associated with uncollateralized derivatives. It is not a direct loan transaction with the counterparty, but rather an implicit funding requirement to cover potential losses.
It's important to note that in collateralized derivatives, the collateral acts as a form of protection against counterparty default risk, reducing the need for explicit funding. The collateral serves as a buffer to cover potential losses, and the holder of the derivative can utilize the collateral to offset their exposure.
| null |
CC BY-SA 4.0
| null |
2023-05-25T08:02:03.603
|
2023-05-25T08:02:03.603
| null | null |
42110
| null |
75657
|
2
| null |
75624
|
2
| null |
By using `ql.MakeVanillaSwap`, you're creating a swap that pays LIBOR vs fixed, not an OIS like the ones you used to bootstrap the curve. If you actually want to use vanilla swaps, you need to use `SwapRateHelper`, not `OISRateHelper`. If you do want to use OIS instead, you'll have to use `OvernightIndexedSwap` to build the swap and retrieve the fair rate.
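As a rough sketch of the second route, reusing the names from the question's setup (so `calculation_date`, `index` and `engine` are assumed to be the calculation date, the cloned overnight index and the discounting engine built on the bootstrapped curve):
```
# assumes `calculation_date`, `index` (clone linked to the bootstrapped curve)
# and `engine` (a DiscountingSwapEngine on the same curve) already exist
import QuantLib as ql

tenor = ql.Period("5Y")
schedule = ql.Schedule(calculation_date, calculation_date + tenor,
                       ql.Period("1Y"),                      # annual payments
                       ql.UnitedStates(ql.UnitedStates.GovernmentBond),
                       ql.ModifiedFollowing, ql.ModifiedFollowing,
                       ql.DateGeneration.Forward, False)

ois = ql.OvernightIndexedSwap(ql.OvernightIndexedSwap.Payer, 1.0,
                              schedule, 0.01, ql.Actual360(), index)
ois.setPricingEngine(engine)
print(ois.fairRate())    # should be close to the quoted 5Y OIS rate
```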
| null |
CC BY-SA 4.0
| null |
2023-05-25T08:26:54.563
|
2023-05-25T08:26:54.563
| null | null |
308
| null |
75659
|
1
|
75660
| null |
4
|
186
|
I'm currently working on building a yield curve using OIS swaps. However, I'm encountering an issue with the cash flow start date that I'm struggling to understand. Here's a simplified version of my code:
Reproducible example
```
import QuantLib as ql
calculation_date = ql.Date().todaysDate() #When I posted this on Quant exchange the date was 25/5/2023
ql.Settings.instance().evaluationDate = calculation_date
index = ql.OvernightIndex("USD Overnight Index", 2, ql.USDCurrency(), ql.UnitedStates(ql.UnitedStates.Settlement), ql.Actual360())
swaps = {
ql.Period("1W"): 0.05064,
ql.Period("2W"): 0.05067,
ql.Period("3W"): 0.05072,
ql.Period("1M"): 0.051021000000000004,
ql.Period("2M"): 0.051391,
ql.Period("3M"): 0.051745,
ql.Period("4M"): 0.05194,
ql.Period("5M"): 0.051980000000000005,
ql.Period("6M"): 0.051820000000000005,
ql.Period("7M"): 0.051584000000000005,
ql.Period("8M"): 0.05131,
ql.Period("9M"): 0.050924,
ql.Period("10M"): 0.050603999999999996,
ql.Period("11M"): 0.050121,
ql.Period("12M"): 0.049550000000000004,
ql.Period("18M"): 0.04558500000000001,
ql.Period("2Y"): 0.042630999999999995,
ql.Period("3Y"): 0.038952,
ql.Period("4Y"): 0.036976,
ql.Period("5Y"): 0.035919,
ql.Period("6Y"): 0.03535,
ql.Period("7Y"): 0.034998,
ql.Period("8Y"): 0.034808,
ql.Period("9Y"): 0.034738000000000005,
ql.Period("10Y"): 0.034712,
ql.Period("12Y"): 0.034801,
ql.Period("15Y"): 0.034923,
ql.Period("20Y"): 0.034662,
ql.Period("25Y"): 0.03375,
ql.Period("30Y"): 0.032826,
ql.Period("40Y"): 0.030834999999999998,
ql.Period("50Y"): 0.02896
}
rate_helpers = []
for tenor, rate in swaps.items():
helper = ql.OISRateHelper(2, tenor, ql.QuoteHandle(ql.SimpleQuote(rate)), index)
rate_helpers.append(helper)
yts = ql.RelinkableYieldTermStructureHandle()
curve = ql.PiecewiseSplineCubicDiscount(calculation_date, rate_helpers, ql.Actual360())
yts.linkTo(curve)
index = index.clone(yts)
engine = ql.DiscountingSwapEngine(yts)
print("maturity | market | model")
for tenor, rate in swaps.items():
schedule = ql.Schedule(calculation_date,
calculation_date + tenor,
ql.Period('1D'),
ql.UnitedStates(ql.UnitedStates.GovernmentBond),
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Forward,
False)
swap = ql.OvernightIndexedSwap(ql.OvernightIndexedSwap.Payer,
1.0,
schedule,
0.01,
ql.Actual360(),
index)
swap.setPricingEngine(engine)
print(f" {tenor} | {rate:.6f} | {swap.fairRate():.6f}")
```
The issue I'm facing is that the cash flow start date for the swaps is appearing two days before the evaluation date (cash flow start date May 23rd, 2023 versus evaluation date May 25th, 2023), which doesn't align with my expectations. I have set the evaluation date as the current date in my code, so the cash flow start date should be either on the evaluation date or after it.
>
File "AAA", line 173, in build_curve_v2
print(f" {tenor} | {rate:.6f} | {swap.fairRate():.6f}")
^^^^^^^^^^^^^^^
File "AAA", line 12, in <module>
bootstrap.Curve_Build()
RuntimeError: 2nd leg: Missing USD Overnight IndexSN Actual/360 fixing for May 23rd, 2023
I suspect there might be something wrong with my bootstrapping method or the way I'm constructing the yield curve. Any guidance or suggestions on how to resolve this issue would be greatly appreciated.
---
A Possible explanation :
When an `OvernightIndexedSwap` is created, the floating leg looks for its first fixing two days before the start of the swap T, where T is the start of the swap. In my case, if the swap starts on the evaluation date of May 25th, 2023, the floating leg will look back to May 23rd, 2023. As a result, a fixing for the overnight rate on May 23rd, 2023, is required.
|
Yield curve bootstrapping not producing expected cash flow start date
|
CC BY-SA 4.0
| null |
2023-05-25T09:34:33.497
|
2023-05-25T10:46:31.373
|
2023-05-25T10:23:59.373
|
42110
|
42110
|
[
"programming",
"fixed-income",
"quantlib",
"swaps",
"ois"
] |
75660
|
2
| null |
75659
|
4
| null |
There are a few consistency problems.
One is that you're passing 2 fixing days to the `ql.OvernightIndex` constructor. This way the schedule starts correctly on the calculation date, but the index will look for fixings two days before that. I'd use 0 days instead.
Another is that, when you're creating the helpers, you're passing 2 settlement days. This means that their schedule will start 2 days after the calculation date. For consistency, since you want to reproduce these rates, you need 0 here as well (or you might keep the 2, but in this case the swap schedules will need to start two business days after the calculation date too.)
Finally, when you create the swaps at the end, you're passing a tenor of 1D to the schedule. That would mean payments each day. Instead, the rate is compounded each day but the payments are annual so you need to pass 1Y instead.
With these changes, I'm getting the same rates.
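For concreteness, a minimal sketch of those three changes applied to the question's code (everything else left as in the question):
```
import QuantLib as ql

calculation_date = ql.Date().todaysDate()
ql.Settings.instance().evaluationDate = calculation_date

# 1) zero fixing days: the index no longer looks for fixings before today
index = ql.OvernightIndex("USD Overnight Index", 0, ql.USDCurrency(),
                          ql.UnitedStates(ql.UnitedStates.Settlement), ql.Actual360())

# 2) zero settlement days: the helper schedules also start on the calculation date
helper = ql.OISRateHelper(0, ql.Period("2Y"),
                          ql.QuoteHandle(ql.SimpleQuote(0.042631)), index)

# 3) annual payment tenor when re-pricing the swaps
schedule = ql.Schedule(calculation_date, calculation_date + ql.Period("2Y"),
                       ql.Period("1Y"),
                       ql.UnitedStates(ql.UnitedStates.GovernmentBond),
                       ql.ModifiedFollowing, ql.ModifiedFollowing,
                       ql.DateGeneration.Forward, False)
```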
| null |
CC BY-SA 4.0
| null |
2023-05-25T10:12:22.900
|
2023-05-25T10:46:31.373
|
2023-05-25T10:46:31.373
|
308
|
308
| null |
75663
|
1
| null | null |
1
|
64
|
Disclaimer: I understand this is a basic question that gets addressed in most 101 textbooks. Yet I have reviewed many of them without finding a satisfactory answer. Please bear with my ignorance.
Suppose a forward contract obliges the parties to exchange an asset at price $K$ at the delivery date $t=T$, and suppose the spot price of that asset at $t=T$ is $L$ (so both $K$ and $L$ are constant). Then the possessor of the forward contract at $t=T$ is forced to buy the asset at price $K$, and the possessor can also sell the asset immediately to obtain $L$. That means the possessor at time $t=T$ can immediately earn $L-K$. Therefore anyone who is to buy that forward contract at $t=T$ must obviously pay $L-K$. This means that the forward contract at $t=T$ should have price $L-K$.
However, all textbooks and resources I've seen claim that the forward contract at $t=T$ should have price $L$ instead of $L-K$.
Why so? What is wrong in my argument?
Why so? What is wrong in my argument?
---
Another attempt I made to understand this point is by reading
[1]. In section 2.3, it provides a more detailed argument:
>
As the delivery period for a futures contract is approached,
the futures price converges to the spot price of the underlying
asset. When the delivery period is reached, the futures price
equals—or is very close to—the spot price. To see why this is
so, we first suppose that the futures price is above the spot
price during the delivery period. Traders then have a clear
arbitrage opportunity:
Sell (i.e., short) a futures contract
Buy the asset
Make delivery.
But when I work the cash flows out, I can't make them balance.
First, selling a future contract yields a flow `(-1 * future) + (+1 * future-price-at-time-T)`. Second, buying the asset yields a
flow `(+1 * asset) + (-1 * spot-price-at-time-T)`. Third, making
delivery yields a flow `(-1 * asset) + K`. The net balance of
three flows is
```
+ 1 * future-price-at-time-T
- 1 * spot-price-at-time-T
- 1 * future
+ K
```
I can't tell why if `+ 1 * future-price-at-time-T - 1 * spot-price-at-time-T > 0` then there is an arbitrage opportunity.
---
Thanks for your patience and sharing.
### Reference
- [1] Options, Futures, and Other Derivatives by John C. Hull
|
Why does forward price equal spot price at delivery?
|
CC BY-SA 4.0
| null |
2023-05-25T13:38:10.107
|
2023-05-25T14:42:54.217
|
2023-05-25T14:42:54.217
|
50125
|
50125
|
[
"futures",
"forward"
] |
75664
|
1
| null | null |
0
|
64
|
In his book (chapters 9.5 to 9.7), Peter Austing argues that barrier options are insensitive to the details of the stochastic volatility model used in a LSV model, except for the level of vol of vol. In particular, he claims that there is no point in including a spot-volatility correlation in the stochastic volatility part of the LSV model because the effect is already taken into account in the local volatility part of the LSV model.
Through testing, I find that, on the contrary, correlation has a large impact on the price of DIPs (down-and-in puts) and UOCs (up-and-out calls), and of course on DOPs and UICs through the in-out parity. I even find that the vol of vol sensitivity actually changes sign for some value of correlation. For example, a DIP price goes up with increasing vol of vol when correlation is close to -1. However, it goes down with increasing vol of vol when correlation is higher (already for negative values of correlation). Conversely, the UOC price goes up with increasing vol of vol except when correlation is close to +1.
The DIP case is particularly interesting because it's the most popular barrier option type on equity derivatives (by far) and the required correlation for calibrating a SV model to an equity vol surface is usually close to -1.
My LSV implementation is in Monte Carlo with the particles method by Guyon and Henry-Labordere. The SV model is either lambda-SABR as suggested by Austing or the one in "Calibrating and pricing with embedded local volatility models" (Ren Madan Qian 2007), it does not make a difference. Vanillas of all strikes and maturities are repriced very accurately for every (correlation, vol of vol) pair tested (MAE ~ 0.2bp, worst fit ~ 1bp).
Does anyone have the same experience?
|
Barrier options in LSV (local stochastic volatility) / Austing's Smile pricing explained
|
CC BY-SA 4.0
| null |
2023-05-25T15:00:23.843
|
2023-05-25T15:00:23.843
| null | null |
67559
|
[
"stochastic-volatility",
"local-volatility",
"barrier"
] |
75665
|
2
| null |
46436
|
0
| null |
Check out Algo Challenge Association ([https://algochallenge.org/](https://algochallenge.org/)).
They organize global algo-trading competitions every year. Each time the theme has a different underlying market (e.g. FX, equity, crypto, etc.).
The quant platform ([https://algogene.com](https://algogene.com)) used in their contests is really sophisticated and powerful.
| null |
CC BY-SA 4.0
| null |
2023-05-25T16:08:50.333
|
2023-05-25T16:10:10.187
|
2023-05-25T16:10:10.187
|
67560
|
67560
| null |
75666
|
1
|
75668
| null |
1
|
63
|
I'm looking for an easy method to approximate the probability, implied by the swaption market, that the forward swap rate ends up in the money. One possibility would be to fit a certain model, e.g. SABR, and extract the risk-neutral density.
On the other hand, I know from the equity case that $N(d_1)$, the delta, is used as an approximation of the probability that the underlying ends in the money. Is there a similar approximation in the swaption case? I.e., could we use normal vol (Bachelier model) and use $N(d_1)$, where $d_1 = \frac{F-K}{\sigma \sqrt{T}}$ for forward swap rate $F$, strike $K$, implied normal vol $\sigma$ and time to maturity $T$? Or is there any other approximation used frequently?
|
Is $N(d_1)$ a good approximation that a swap enters in the money?
|
CC BY-SA 4.0
| null |
2023-05-25T18:39:35.957
|
2023-05-26T00:39:02.777
| null | null |
44729
|
[
"swaption"
] |
75667
|
1
| null | null |
1
|
43
|
We know that a market is called complete if it is possible to replicate any future payoff by trading in its securities. Is there an exhaustive list of requirements that, when satisfied, imply market completeness? Is perfect competitiveness one of those?
|
Conditions for market completeness
|
CC BY-SA 4.0
| null |
2023-05-25T20:04:02.053
|
2023-05-31T07:47:01.027
| null | null |
62629
|
[
"option-pricing",
"hedging",
"financial-markets"
] |
75668
|
2
| null |
75666
|
4
| null |
It is best to, as you say, extract the risk-neutral density from a model that fits the market skew, such as SABR. Then you can compute the probability directly. The problem with $N(d)$ is that you are assuming constant volatility, either constant normal volatility in the Bachelier model or constant lognormal volatility in the BS model. In a market where there is significant skew, this will give you the wrong answer.
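To illustrate the point, here is a small Python sketch with a made-up normal-vol skew (so the numbers are purely illustrative), comparing $N(d_1)$ at the strike's own vol with the smile-consistent probability obtained as $-\partial C/\partial K$ from a Bachelier pricer:
```
import numpy as np
from scipy.stats import norm

F, K, T = 0.03, 0.035, 1.0

def normal_vol(k):
    # hypothetical skew: higher normal vol away from the forward
    return 0.0080 + 0.05 * abs(k - F)

def bachelier_call(k, sigma):
    d = (F - k) / (sigma * np.sqrt(T))
    return (F - k) * norm.cdf(d) + sigma * np.sqrt(T) * norm.pdf(d)

# constant-vol approximation: N(d1) using the vol quoted at strike K
sigma_K = normal_vol(K)
p_const_vol = norm.cdf((F - K) / (sigma_K * np.sqrt(T)))

# smile-consistent probability of finishing in the money: -dC/dK through the skew
h = 1e-5
p_smile = -(bachelier_call(K + h, normal_vol(K + h))
            - bachelier_call(K - h, normal_vol(K - h))) / (2 * h)

print(p_const_vol, p_smile)   # the two can differ noticeably when the skew is steep
```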
| null |
CC BY-SA 4.0
| null |
2023-05-26T00:39:02.777
|
2023-05-26T00:39:02.777
| null | null |
18388
| null |
75669
|
1
| null | null |
0
|
53
|
This question is a follow-up of this question [Fixed Payment at Default - Pricing](https://quant.stackexchange.com/questions/75642/fixed-payment-at-default-pricing) for more clarity.
Starting from the following expression of payoff:
$$
D(0,T) = E(\exp(-\int_{0}^{\tau} r(t)dt) \cdot \mathbb{1}_{\{\tau \leq T\}})
$$
and knowing that
$$
\text{Probability}(T \leq \tau \leq T+dT) = \lambda(T) \cdot \exp(-\int_{0}^{T} \lambda(t)dt) \cdot dT
$$
we aim to arrive to:
$$
D(0,T) = E(\int_{0}^{T} \lambda(t) \cdot \exp(-\int_{0}^{t} (r(s)+\lambda(s))ds)dt)
$$
The idea as stated is to integrate the payoff over all possible times of default, thus from 0 to T.
We thus have:
$$
D(0,T) = E(\int_{0}^{T} \exp(-\int_{0}^{t} r(s)ds) \cdot \mathbb{1}_{\{t \leq T\}}dt \mid \mathcal{F}_0)
$$
But I am struggling to develop the calculation from this step. Would the law of iterated expectations be needed?
|
Follow-up Fixed Payment at Default - Pricing
|
CC BY-SA 4.0
| null |
2023-05-26T10:34:00.407
|
2023-05-26T12:37:27.413
|
2023-05-26T12:37:27.413
|
66451
|
66451
|
[
"finance-mathematics"
] |
75671
|
1
|
75681
| null |
0
|
80
|
I have been attempting to bootstrap zero rates using `quantlib`, but I am perplexed by the significant discrepancies between my calculated zero rates and those obtained from Bloomberg's bootstrapping process. I would greatly appreciate any insights or suggestions regarding potential reasons for this mismatch. Below is the reproducible example :
```
import QuantLib as ql
calculation_date = ql.Date().todaysDate() #When I posted this on Quant exchange the date was 26/5/2023
ql.Settings.instance().evaluationDate = calculation_date
index = ql.OvernightIndex("USD Overnight Index", 0, ql.USDCurrency(), ql.UnitedStates(ql.UnitedStates.Settlement), ql.Actual360())
swaps = {
ql.Period("1W"): 0.05064,
ql.Period("2W"): 0.05067,
ql.Period("3W"): 0.05072,
ql.Period("1M"): 0.051021000000000004,
ql.Period("2M"): 0.051391,
ql.Period("3M"): 0.051745,
ql.Period("4M"): 0.05194,
ql.Period("5M"): 0.051980000000000005,
ql.Period("6M"): 0.051820000000000005,
ql.Period("7M"): 0.051584000000000005,
ql.Period("8M"): 0.05131,
ql.Period("9M"): 0.050924,
ql.Period("10M"): 0.050603999999999996,
ql.Period("11M"): 0.050121,
ql.Period("12M"): 0.049550000000000004,
ql.Period("18M"): 0.04558500000000001,
ql.Period("2Y"): 0.042630999999999995,
ql.Period("3Y"): 0.038952,
ql.Period("4Y"): 0.036976,
ql.Period("5Y"): 0.035919,
ql.Period("6Y"): 0.03535,
ql.Period("7Y"): 0.034998,
ql.Period("8Y"): 0.034808,
ql.Period("9Y"): 0.034738000000000005,
ql.Period("10Y"): 0.034712,
ql.Period("12Y"): 0.034801,
ql.Period("15Y"): 0.034923,
ql.Period("20Y"): 0.034662,
ql.Period("25Y"): 0.03375,
ql.Period("30Y"): 0.032826,
ql.Period("40Y"): 0.030834999999999998,
ql.Period("50Y"): 0.02896
}
rate_helpers = []
for tenor, rate in swaps.items():
helper = ql.OISRateHelper(0, tenor, ql.QuoteHandle(ql.SimpleQuote(rate)), index)
rate_helpers.append(helper)
yts = ql.RelinkableYieldTermStructureHandle()
curve = ql.PiecewiseFlatForward(calculation_date, rate_helpers, ql.Actual360())
yts.linkTo(curve)
index = index.clone(yts)
engine = ql.DiscountingSwapEngine(yts)
print("maturity | market | model | zero rate | discount factor")
for tenor, rate in swaps.items():
schedule = ql.Schedule(calculation_date,
calculation_date + tenor,
ql.Period('1Y'),
ql.UnitedStates(ql.UnitedStates.GovernmentBond),
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Forward,
False)
swap = ql.OvernightIndexedSwap(ql.OvernightIndexedSwap.Payer,
1.0,
schedule,
0.01,
ql.Actual360(),
index)
swap.setPricingEngine(engine)
maturity_date = calculation_date + tenor
zero_rate = curve.zeroRate(maturity_date, ql.Actual360() , ql.Compounded).rate()
discount_factor = curve.discount(maturity_date)
print(f" {tenor} | {rate*100:.6f} | {swap.fairRate()*100:.6f} | {zero_rate*100:.6f} | {discount_factor:.6f}")
```
The output of this code is :
```
maturity | market | model | zero rate | discount factor
1W | 5.064000 | 5.064000 | 5.191792 | 0.999016
2W | 5.067000 | 5.067000 | 5.192324 | 0.998033
3W | 5.072000 | 5.072000 | 5.194951 | 0.997050
1M | 5.102100 | 5.102100 | 5.222740 | 0.995626
```
However, the output displayed on Bloomberg, which I believe to be accurate, is as follows:
[](https://i.stack.imgur.com/E9r0J.png)
I am inclined to believe that the issue could be attributed to the curve parameters or conventions I am using.
I would greatly appreciate any suggestions, insights that could help me understand and resolve the disparities between my calculated zero rates and the accurate rates shown on Bloomberg. Thank you for your valuable assistance!
Below is the information I have from bloomberg regarding conventions
[](https://i.stack.imgur.com/bgho3.png)
[](https://i.stack.imgur.com/8ihFp.png)
[](https://i.stack.imgur.com/9U67Q.png)
|
Discrepancy between Bootstraped Zero Rates: Gaps between Bloomberg and My Calculated Zero Rates
|
CC BY-SA 4.0
| null |
2023-05-26T14:04:54.427
|
2023-05-27T09:00:52.460
| null | null |
42110
|
[
"programming",
"fixed-income",
"quantlib",
"swaps",
"ois"
] |
75672
|
1
| null | null |
2
|
53
|
I am currently working on pricing a Zero Coupon Inflation Swap using Quantlib in Python. During my analysis, I have observed that when the start date and end date of the swap coincide exactly with X years from the valuation date (e.g., a maturity date of March 31, 2041), and I'm pricing the swap on the same date (e.g., March 31, 2023), the price obtained from Quantlib in Python closely matches the price provided by Bloomberg.
However, a notable price discrepancy arises when the time to maturity is not precisely X years from the current date. I suspect that this difference is influenced by seasonality. It is worth noting that Bloomberg's prices (when not using seasonality) are close to the Quantlib prices.
While Quantlib does offer the option to implement seasonality, the results obtained using this approach are not meeting my expectations.
I have obtained seasonality adjustment parameters from Bloomberg (SWIL in BB) and using them now in Quantlib, although I'm not certain if my approach is correct.
Here are the adjustment values for each month taken from Bloomberg:
```
Month Adjustment
Jan: -0.004238
Feb: 0.004928
Mar: 0.011557
Apr: 0.00558
May: 0.003235
Jun: 0.003415
Jul: -0.002354
Aug: 0.001692
Sep: 0.004916
Oct: 0.005826
Nov: -0.001049
Dec: 0.001429
```
In the code snippet below, I have included the necessary parameter setup and the implementation of the seasonality adjustment.
Does anyone know how to correctly use the seasonality adjustment in Quantlib? I'm having trouble achieving results that match Bloomberg's prices. Any insights would be greatly appreciated.
Code should work as I included all parameters used to create all the curves.
```
import QuantLib as ql
from QuantLib import *
import pandas as pd
start_date = ql.Date(17,2,2021)
# start_date = ql.Date(31,3,2021)
calc_date = ql.Date(31,3,2023)
calc_date_db = pd.Timestamp(2023, 3, 31)  # pd.datetime was removed in recent pandas versions
end_date = ql.Date(17,2,2031)
# end_date = ql.Date(31,3,2031)
swap_type = ql.ZeroCouponInflationSwap.Receiver
calendar = ql.TARGET()
day_count_convention = ql.ActualActual()
contract_observation_lag = ql.Period(3, ql.Months)
business_day_convention = ql.ModifiedFollowing
nominal = 10e6
fixed_rate = 2.44 / 100
ql.Settings.instance().evaluationDate = calc_date
# create inflation yield term structure and make relinkable object
inflation_yield_term_structure = ql.RelinkableZeroInflationTermStructureHandle()
inflation_index = ql.EUHICP(False, inflation_yield_term_structure)
# ois rates and create curve
tenor = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 15.0, 20.0, 25.0, 30.0, 40.0, 50.0]
rates = [3.227, 3.3535, 3.168, 2.9945, 2.8645, 2.786, 2.7465, 2.7255, 2.7195, 2.722, 2.7325, 2.7615, 2.7845, 2.6993, 2.5608, 2.4373, 2.2565, 2.1525]
ois_curve = pd.Series(rates, index=tenor, name='Tenor')
short_curve = ois_curve[ois_curve.index < 1]
short_curve.index = short_curve.index * 12
long_curve = ois_curve[ois_curve.index >= 1]
helper = [ql.OISRateHelper(0,
ql.Period(int(tenor), Months),
ql.QuoteHandle(ql.SimpleQuote(rate / 100)),
ql.Eonia(),
ql.YieldTermStructureHandle(),
True)
for tenor, rate in short_curve.items()]
helper += [ql.OISRateHelper(0,
ql.Period(int(tenor), Years),
ql.QuoteHandle(SimpleQuote(rate / 100)),
ql.Eonia(),
ql.YieldTermStructureHandle(),
True)
for tenor, rate in long_curve.items()]
discount_curve = ql.PiecewiseLogCubicDiscount(0, ql.TARGET(), helper, ql.Actual365Fixed())
discount_curve.enableExtrapolation()
discount_handle = ql.RelinkableYieldTermStructureHandle()
discount_handle.linkTo(discount_curve)
# create inflation resets series and add to inflation_index ql object
inflation_resets = pd.Series(data=[104.05, 104.23, 104.77, 105.06, 104.94, 105.29, 104.9, 104.45, 104.54, 104.73,
104.35, 104.7, 104.88, 105.1, 106.09, 106.69, 106.97, 107.26, 107.16, 107.54,
108.06, 108.99, 109.49, 109.97, 110.3, 111.35, 114.12, 114.78, 115.74, 116.7,
116.83, 117.55, 118.99, 120.79, 120.7, 120.24, 119.96, 120.94],
index=pd.date_range(start='2020-01-01', periods=38, freq='M'))
def add_inflation_reset_rates():
for date, rate in inflation_resets.items():
date = ql.Date(date.day, date.month, date.year)
rate = rate / 100
inflation_index.addFixing(date, rate)
add_inflation_reset_rates()
# create inflation curve and make ql inflation curve object
data = {'Tenor': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 25, 30, 40, 50],
'Rates': [3.77, 3.025, 2.788, 2.6715, 2.607, 2.5705, 2.5473, 2.444, 2.444, 2.446, 2.463, 2.5515, 2.567, 2.6165, 2.6415, 2.7375, 2.7838]}
inflation_curve = pd.Series(data['Rates'], index=data['Tenor'])
inflation_rate_helpers = []
for tenor, rate in inflation_curve.iteritems():
maturity = calendar.advance(calc_date, ql.Period(int(tenor), ql.Years))
quote = ql.QuoteHandle(ql.SimpleQuote(rate / 100.0))
helper = ql.ZeroCouponInflationSwapHelper(quote,
contract_observation_lag,
maturity,
calendar,
business_day_convention,
day_count_convention,
inflation_index,
discount_handle)
inflation_rate_helpers.append(helper)
base_zero_rate = 0
inflation_curve = ql.PiecewiseZeroInflation(calc_date,
calendar,
day_count_convention,
contract_observation_lag,
ql.Monthly,
inflation_index.interpolated(),
base_zero_rate,
inflation_rate_helpers,
1.0e-12,
ql.Linear())
inflation_yield_term_structure.linkTo(inflation_curve)
# add seasonality to inflation_curve
seasonality_base_date = ql.Date(1, 1, 2010)
frequency = ql.Monthly
seasonality_factors = [0.991831,
1.004143,
1.010895,
1.003874,
1.002072,
1.002082,
0.995812,
1.001432,
1.004451,
1.003218,
0.999,
1.001887]
seasonality = ql.MultiplicativePriceSeasonality(seasonality_base_date, frequency, seasonality_factors)
inflation_curve.setSeasonality(seasonality)
swap = ql.ZeroCouponInflationSwap(swap_type,
nominal,
start_date,
end_date,
calendar,
business_day_convention,
day_count_convention,
fixed_rate,
inflation_index,
contract_observation_lag)
# inspect float cashflow
floating_leg = swap.inflationLeg()
for cashflow_float in floating_leg:
print(f'print inflation leg cashflows')
print(cashflow_float.date())
print(cashflow_float.amount())
# inspect fixed cashflow
fixed_leg = swap.fixedLeg()
for cashflow_fixed in fixed_leg:
print(f'print fixed leg cashflows')
print(cashflow_fixed.date())
print(cashflow_fixed.amount())
# price and print inflation swap price
swap_engine = ql.DiscountingSwapEngine(discount_handle)
swap.setPricingEngine(swap_engine)
print(swap.NPV())
```
|
Seasonality adjustment within Quantlib Zero Coupon Inflation Swap
|
CC BY-SA 4.0
| null |
2023-05-26T14:15:08.287
|
2023-05-26T14:15:08.287
| null | null |
16788
|
[
"quantlib",
"swaps",
"inflation",
"seasonality"
] |
75674
|
1
| null | null |
-1
|
52
|
Q "A company that made a profit of £300k with a net cash inflow of £200k is issuing new shares. It estimates that this will impact the cash flow statement with £100k in proceeds and £20 in dividends. It also expects tax to rise from £40k to £50k.
What is the maximum this company will have available for strategic investment?"
So I have (200k + 100k) - (50k + 20k) = 230k.
My reasoning is that 200k is said to be an inflow, plus the 100k the company gets from issuing shares, while tax and dividends are the only negative cash-flow impacts. Could someone please check this? I just want to understand, thank you.
|
Free Cash Flow Question
|
CC BY-SA 4.0
| null |
2023-05-26T17:52:04.603
|
2023-05-26T17:52:04.603
| null | null |
67570
|
[
"finance",
"investment"
] |
75675
|
2
| null |
75463
|
1
| null |
Here are some suggestions/random thoughts based on my own past experience debugging unexpected VaR values. They may help with Expected Shortfall (ES) as well.
Maybe the markets really behaved differently in-sample - the historical days that you use directly to calculate VaR or to calculate a covariance matrix - than out-of-sample - the days when you backtest the VaR v the P&L. Many VaR users exclude from their in-sample those days when some market factor moved more than some relatively large number of its historical standard deviations. To compensate, such atypical extreme shocks should be included among historical market stress scenarios instead, rather than in the VaR historical data.
Check whether the calculation uses punitive high volatilities for unrecognized market factors. In production environments, VaR calculations sometimes encounter exposures to market factors that are not immediately recognized, for example the first time someone trades Bitcoin futures or a Kazakhstan cross-currency swap. :) Some "academic IT" calculators just stop right there, throw exceptions, and wait for the problem to be resolved the next day. But practically, some users of the VaR are eager to know how the new trades affected their VaR, even if the estimates err on the conservative side. Better engineered VaR calculators conservatively assume that the unrecognized factors have some very large volatility using "wildcard" rules and no correlation to anything else, and warn loudly that a data issue needs to be resolved. The "punitive" volatility needs to be large enough to incentivize VaR users to quickly replace it with historical volatilities (presumably lower) and correlations in the covariance matrix. If the warnings are not heeded, the punitive volatilities continue to be used, and the VaR is conservatively overstated.
Ideally, some new product approval process should try to ensure that the new market factors are known to the VaR calculator before they are first traded, but this isn't always practical.
Relatedly, market factors may be mislabeled, for example a sensitivity to a B-rated credit spread may be mislabeled as a sensitivity to a riskier unrated credit spread.
Monte Carlo perturbing market factors using pairwise correlations rather than principal components.
If you view the rates at the various tenors of an interest rate curve as the market factors, naively calculate their volatilities and correlations, use Monte Carlo to generate market movements, and then look closely at the generated movements, many may strike you as unrealistic, and for some exposures they may give rise to unrealistic simulated P&L. The other extreme would be for the Monte Carlo to assume that rates are perfectly correlated and change only in parallel. Perturbing (a sufficient number of) historical principal components is a better methodology; a small sketch follows below.
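As a small illustration of the principal-component idea, here is a minimal numpy sketch; all volatilities and correlations below are made-up toy numbers, not taken from any real curve:
```
import numpy as np

rng = np.random.default_rng(1)

# made-up covariance of daily rate changes (in bp) for a 4-tenor curve
vols = np.array([5.0, 6.0, 6.5, 7.0])
corr = np.array([[1.00, 0.95, 0.90, 0.85],
                 [0.95, 1.00, 0.97, 0.93],
                 [0.90, 0.97, 1.00, 0.97],
                 [0.85, 0.93, 0.97, 1.00]])
cov = np.outer(vols, vols) * corr

# principal components of the curve moves, largest first
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# simulate curve scenarios by perturbing only the first k components
k = 2
scores = rng.standard_normal((10_000, k)) * np.sqrt(eigvals[:k])
scenarios = scores @ eigvecs[:, :k].T   # simulated tenor moves, in bp
print(scenarios.std(axis=0))            # close to the input vols: the first PCs carry most of the variance
```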
Covariance matrix too sparse. If a portfolio is long two assets assumed to have correlation close to -1, or long one and short another assumed to have correlation close to 1, but the covariance matrix has 0 correlation, then the VaR would be overstated. A debugging tool useful for detecting missing correlations is to have the VaR calculator print out its market factors in groups, with zero correlations between groups and non-zero correlations within groups.
Component VaR is another very helpful tool. Disaggregate the portfolio into pieces as small as possible and compare their VaR and Component VaR with their P&L, to see which pieces have VaR not matching P&L and what they have in common. Disaggregate the VaR into component VaR by market factor type and by individual market factor, and compare with the Risk-Theoretical P&L (RTPL) attributing the P&L to the market factors. Large unexplained P&L (UPL) not explained by the RTPL warrants investigation.
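As a rough illustration of the disaggregation idea, here is a minimal sketch of a parametric (delta-normal) Euler allocation of VaR to positions; the position vector, covariance matrix and the 2.33 quantile below are made up for illustration and are not tied to any particular production calculator:
```
import numpy as np

def component_var(w, cov, z=2.33):
    """Delta-normal VaR and its Euler decomposition into component VaRs.

    w   : vector of position sensitivities (P&L per unit factor move)
    cov : covariance matrix of factor moves
    z   : quantile multiplier (2.33 ~ one-sided 99% normal)
    """
    w = np.asarray(w, dtype=float)
    sigma_p = np.sqrt(w @ cov @ w)       # portfolio P&L standard deviation
    total_var = z * sigma_p              # parametric VaR
    marginal = z * (cov @ w) / sigma_p   # marginal VaR per unit of each position
    component = w * marginal             # Euler allocation: sums to total_var
    return total_var, component

# toy example: long/short on two highly correlated factors
cov = np.array([[0.040, 0.038],
                [0.038, 0.040]])
total, comp = component_var([1.0, -1.0], cov)
print(total, comp, comp.sum())           # the components add up to the total VaR
```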
Whether the VaR calculator uses historical or Monte Carlo simulation, it should be able to output the market scenarios that gave rise to P&L = VaR and the entire tail of P&Ls worse than VaR, filtering only the market factors that affect the portfolio under investigation, and showing their movements both in absolute terms and as a number of historical standard deviations. If these scenarios don't look realistic, then perhaps some historical dates need to be excluded from the in-sample set, or the covariance matrix needs to be made more realistic.
| null |
CC BY-SA 4.0
| null |
2023-05-26T17:56:06.867
|
2023-05-26T17:56:06.867
| null | null |
36636
| null |
75676
|
2
| null |
74924
|
2
| null |
Even though it is true that the volatility is constant in this setting, the relationship is valid for any terminal condition or pay-off function -- beyond the typical $(\pm(S-K))_+$ -- so long as the pay-off function is independent of the volatility. We could certainly write out the integral expressions for the vega and gamma of arbitrary pay-off functions and find their relationship, but it seems simpler to deal with the PDE directly. Moreover, this methodology can be used to find other higher-order partial derivatives.
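As a quick sanity check in the plain Black-Scholes setting for a vanilla call or put (constant $\sigma$, no dividends, time to expiry $\tau$), the closed-form Greeks give the relation directly:
$$
\Gamma = \frac{N'(d_1)}{S\sigma\sqrt{\tau}}, \qquad \text{Vega} = S\,N'(d_1)\sqrt{\tau}
\quad\Longrightarrow\quad \text{Vega} = \sigma\,\tau\,S^2\,\Gamma\,,
$$
consistent with what the PDE argument gives for general volatility-independent pay-offs.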
| null |
CC BY-SA 4.0
| null |
2023-05-26T21:16:36.917
|
2023-05-26T21:28:58.757
|
2023-05-26T21:28:58.757
|
6686
|
6686
| null |
75677
|
1
| null | null |
0
|
59
|
For focus, let us restrict the scope of this to vanilla options-based positions/strategies.
In a lot of the accounts that I've seen of those that engage in this sort of investment/trading strategy (Nassim Taleb, among others), from my view, the entry conditions seem relatively apparent, e.g., potentially, one either constantly holds an open position in very deep out-of-the-money short maturity options on, for example, a market index or uses some sort of model to identify when said options are underpriced and purchases accordingly.
What is left more ambiguous (and I realize that this is potentially because it is proprietary/a source of their profits/etc.) is how one determines exits from these positions. I've seen mention of a somewhat heuristic/discretionary method of observing when your position increases greatly in value, or when an event occurs that would plausibly cause this, and then liquidating half of your position as quickly as possible, but I'm wondering whether there is a more quantitative approach.
I've looked around and haven't really found much on this and am looking for references on either this explicitly or on what sort of field/area of knowledge this may fall into. I know fields like Extreme Value Theory address things similar to this, but I'm not sure it would apply exactly without accounting for time dependencies, etc. Would there potentially be a non-probabilistic approach to this?
Apologies if this question is still not focused enough; any input/references are greatly appreciated, thanks.
|
Input/References on Generating Exit Signals for Positions that Profit Very Highly from Extreme and "Unpredictable" Events?
|
CC BY-SA 4.0
| null |
2023-05-26T22:04:59.560
|
2023-06-03T07:08:41.977
|
2023-06-03T06:26:37.680
|
67193
|
67193
|
[
"options",
"quant-trading-strategies",
"reference-request",
"option-strategies",
"event-study"
] |
75678
|
2
| null |
10873
|
0
| null |
CME is another derivatives exchange that trades power markets.
| null |
CC BY-SA 4.0
| null |
2023-05-26T23:31:59.027
|
2023-05-26T23:31:59.027
| null | null |
19424
| null |
75680
|
2
| null |
10689
|
1
| null |
$$
\def\Filtr{\mathcal{F}}
\def\EF{E^\Filtr}
$$
Let $f=dQ/dP$, and denote by $E$, $E_Q$ the expectation
with respect to the measure $P$, $Q$, respectively.
Let us also write $\EF$, $\EF_Q$ instead of
$E(\cdot|\Filtr)$,
$E_Q(\cdot|\Filtr)$.
Assume that all random variables listed below are integrable,
in particular, that $E|\xi|$, $E|f\xi|$, $E|f^2\xi|<\infty$.
Let $\Filtr$ be any $\sigma$-field.
Thanks to self-adjointness property of conditional expectation ($E(\xi\EF\eta)=E(\eta\EF\xi)$), we have for every $A\in\Filtr$:
\begin{align*}
\newcommand{\eqby}[1]{\stackrel{\text{#1}}{=}}
E(\xi f\EF(f1_A)) &= E(f1_A\EF(\xi f)),\\
E_Q(\xi\EF(f1_A)) &= E_Q(\EF(\xi f)1_A),\\
E_Q(\xi(\EF f)1_A) &= E_Q(\EF(\xi f)1_A),\\
\EF_Q(\xi\EF f) &\eqby{a.s.} \EF\xi f,\\
(\EF f)(\EF_Q\xi) &\eqby{a.s.} \EF\xi f.
\end{align*}
The second equality follows from the definition of $f$,
the third from the pull-out property
($\EF\xi\eta=\xi\EF\eta$, if $\xi$ is $\Filtr$-measurable)
and from the $\Filtr$-measurability of $1_A$,
the fourth from the definition of the conditional expectation $\EF_Q$,
and the last one again by the pull-out property,
as $\EF f$ is already $\Filtr$-measurable.
| null |
CC BY-SA 4.0
| null |
2023-05-27T08:36:32.940
|
2023-05-27T08:37:24.757
|
2023-05-27T08:37:24.757
|
67576
|
67576
| null |
75681
|
2
| null |
75671
|
4
| null |
There are several issues with your Python code:
- USD SOFR swaps have a settlement lag of two business days, see the T+2 under "Settlement" in your last screen shot from Bloomberg. So the first argument in your OISRateHelper must be 2, not 0.
- you should use the maturity_date of your OIS swap instruments when you print out your results, and not "calculation_date + tenor" since this does not take into account the settlement lag and holidays.
- the zero rate which Bloomberg shows is derived from the discount factor using the "continuously compounded" convention with an Actual/365 day count
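Restating that last point as a formula, the zero rate is recovered from the discount factor as
$$
z(t_0,T) \;=\; -\,\frac{365}{d(t_0,T)}\,\ln DF(t_0,T)\,,
$$
where $d(t_0,T)$ is the number of calendar days from the curve date to the maturity date; this is exactly what the `zero_rate` line in the code below computes.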
With these and other modifications my code looks like this:
```
import QuantLib as ql
import math
calculation_date = ql.Date(26,5,2023)
ql.Settings.instance().evaluationDate = calculation_date
yts = ql.RelinkableYieldTermStructureHandle()
index = ql.OvernightIndex("USD Overnight Index", 0, ql.USDCurrency(), ql.UnitedStates(ql.UnitedStates.Settlement), ql.Actual360(), yts)
swaps = {
ql.Period("1W"): 0.05064,
ql.Period("2W"): 0.05067,
ql.Period("3W"): 0.05072,
ql.Period("1M"): 0.051021000000000004,
ql.Period("2M"): 0.051391,
ql.Period("3M"): 0.051745,
ql.Period("4M"): 0.05194,
ql.Period("5M"): 0.051980000000000005,
ql.Period("6M"): 0.051820000000000005,
ql.Period("7M"): 0.051584000000000005,
ql.Period("8M"): 0.05131,
ql.Period("9M"): 0.050924,
ql.Period("10M"): 0.050603999999999996,
ql.Period("11M"): 0.050121,
ql.Period("12M"): 0.049550000000000004,
ql.Period("18M"): 0.04558500000000001,
ql.Period("2Y"): 0.042630999999999995,
ql.Period("3Y"): 0.038952,
ql.Period("4Y"): 0.036976,
ql.Period("5Y"): 0.035919,
ql.Period("6Y"): 0.03535,
ql.Period("7Y"): 0.034998,
ql.Period("8Y"): 0.034808,
ql.Period("9Y"): 0.034738000000000005,
ql.Period("10Y"): 0.034712,
ql.Period("12Y"): 0.034801,
ql.Period("15Y"): 0.034923,
ql.Period("20Y"): 0.034662,
ql.Period("25Y"): 0.03375,
ql.Period("30Y"): 0.032826,
ql.Period("40Y"): 0.030834999999999998,
ql.Period("50Y"): 0.02896
}
rate_helpers = []
for tenor, rate in swaps.items():
helper = ql.OISRateHelper(2, tenor, ql.QuoteHandle(ql.SimpleQuote(rate)), index)
rate_helpers.append(helper)
curve = ql.PiecewiseFlatForward(calculation_date, rate_helpers, ql.Actual360())
yts.linkTo(curve)
engine = ql.DiscountingSwapEngine(yts)
print("maturity | market | model | zero rate | discount factor | present value")
for tenor, rate in swaps.items():
ois_swap = ql.MakeOIS(tenor, index, rate)
pv = ois_swap.NPV()
fair_rate = ois_swap.fairRate()
maturity_date = ois_swap.maturityDate()
discount_factor = curve.discount(maturity_date)
zero_rate = -math.log(discount_factor) * 365.0/(maturity_date-calculation_date)
print(f" {tenor} | {rate*100:.6f} | {fair_rate*100:.6f} | {zero_rate*100:.6f} | {discount_factor:.6f} | {pv:.6f}")
```
And my result for the first four grid points is, as in Bloomberg
```
maturity | market | model | zero rate | discount factor | present value
1W | 5.064000 | 5.064000 | 5.131807 | 0.998314 | -0.000000
2W | 5.067000 | 5.067000 | 5.132185 | 0.997332 | 0.000000
3W | 5.072000 | 5.072000 | 5.134266 | 0.996349 | 0.000000
1M | 5.102100 | 5.102100 | 5.157684 | 0.995066 | -0.000000
```
You might also take a look [here](https://quant.stackexchange.com/a/73526)
| null |
CC BY-SA 4.0
| null |
2023-05-27T09:00:52.460
|
2023-05-27T09:00:52.460
| null | null |
60662
| null |
75682
|
2
| null |
74924
|
4
| null |
Just want to add the observation that the pricing PDE solution can be formally written as
$$
C(\tau) = e^{\tau \mathcal H} C(0) \quad (*)
$$
where $\tau$ is time to maturity and $\mathcal H$ is a differential operator. For example, in the BS world with zero interest rate it is
$$
\mathcal H = \tfrac12 \sigma^2 S^2 \frac{\partial^2}{\partial S^2}
$$
Thus $U(\tau) = e^{\tau \mathcal H}$ is an 'evolution operator'.
In the BS case $\sigma$ is not a variable but a parameter. So you can differentiate both sides of equation (*) to very quickly obtain the vega gamma relation by noting that the operator $U(\tau)$ depends on the parameter $\sigma$. You could reinsert dividends and rates to also obtain sensitivities to $r$ and $q$ in the same manner.
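Spelling out that differentiation step for the zero-rate operator above (a small sketch, using that $\partial_\sigma \mathcal H = \sigma S^2 \partial_S^2$ is proportional to $\mathcal H$ and therefore commutes with it):
$$
\frac{\partial C(\tau)}{\partial \sigma}
= \tau\,\frac{\partial \mathcal H}{\partial \sigma}\, e^{\tau \mathcal H} C(0)
= \tau\,\sigma S^2 \frac{\partial^2}{\partial S^2} C(\tau)
= \sigma\,\tau\, S^2\,\Gamma\,,
$$
which is the usual vega-gamma relation.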
In stochastic volatility models you can similarly show that the sensitivity of the option price to the correlation parameter is the stochastic volatility model vanna.
| null |
CC BY-SA 4.0
| null |
2023-05-27T11:00:56.550
|
2023-05-27T11:00:56.550
| null | null |
65759
| null |
75683
|
1
| null | null |
3
|
811
|
In [this](https://arxiv.org/abs/2202.05671) preprint on arXiv (a revised version of the one discussed in a post [here](https://quant.stackexchange.com/questions/69875/preprint-investigating-black-scholes-formula-correctness)) we show that there are three mathematical mistakes in the option pricing framework of Black, Scholes and Merton. As a result, the option pricing formula seems incorrect even under the idealized capital market assumptions of Black and Scholes. As the preprint shows in more detail, the three mathematical mistakes are:
i) The self-financing condition is misspecified (i.e., it does not express the concept of portfolio rebalancing without inflows or outflows of external funds);
ii) Even if one assumes that the self-financing condition is correctly specified (i.e., if one sidesteps mistake (i)), there is a circularity in the proof that Black and Scholes provide for their claim that a rebalanced portfolio of stocks and risk-free bonds can replicate an option;
iii) Even if one also assumes that the rebalanced portfolio replicates an option (i.e., if one sidesteps mistakes (i) and (ii)), the PDE of Black and Scholes implies that there are paths where the rebalanced portfolio is not self-financing or does not replicate an option.
To facilitate the discussion a little bit, let's focus on mistake (i) and set aside (ii) and (iii). Staying close to the notation of Black and Scholes, the preprint summarizes that derivations of the option pricing formula consider a replicating portfolio of $\alpha_{t}$ stocks with value $x_{t}$ and $\beta_{t}$ risk-free bonds with value $b_{t}$. These derivations define the value of this portfolio as:
\begin{equation}
w_{t}=\alpha_{t}x_{t}+\beta_{t}b_{t},
\end{equation}
and define the return as:
\begin{equation}\label{return}
\int_{0}^{t}dw_{s}=\int_{0}^{t}\alpha_{s}dx_{s}+\int_{0}^{t}\beta_{s}db_{s}.
\end{equation}
Since applying the product rule of stochastic integration to the portfolio value yields:
\begin{equation}\label{prsi}
\int_{0}^{t}dw_s=\int_{0}^{t}\alpha_{s}dx_s+\int_{0}^{t}d\alpha_{s}x_{s}+\int_{0}^{t}d\alpha_{s}dx_{s}+\int_{0}^{t}\beta_{s}db_{s}+ \int_{0}^{t}d\beta_{s}b_{s}+ \int_{0}^{t}d\beta_{s}db_{s},
\end{equation}
the above definition of the portfolio return implies that:
\begin{equation}\label{ctsfc}
\int_{0}^{t}d\alpha_{s}x_{s}+ \int_{0}^{t}d\alpha_{s}dx_{s}+\int_{0}^{t}d\beta_{s} b_{s}+ \int_{0}^{t}d\beta_{s}db_{s}=0,
\end{equation}
which is known as the continuous-time self-financing condition. This condition is believed to reflect that the portfolio is rebalanced without inflows or outflows of external funds, based on a motivation that goes back to [Merton (1971)](https://www.sciencedirect.com/science/article/abs/pii/B9780127808505500526). The preprint shows, however, that there is a timing mistake in the analysis of Merton, and that this mistake causes his self-financing condition to be misspecified. That is, the last equation does not reflect the concept of portfolio rebalancing without inflows or outflows of external funds (and the return on a portfolio that is rebalanced without inflows or outflows of external funds is therefore not equal to the second equation). Is our analysis of mistake (i) in the preprint correct, or do we make a mistake somewhere ourselves?
|
Three mathematical mistakes in Black-Scholes-Merton option pricing?
|
CC BY-SA 4.0
| null |
2023-05-27T18:12:36.860
|
2023-05-30T14:51:58.037
|
2023-05-30T10:19:05.450
|
67582
|
67582
|
[
"option-pricing",
"black-scholes",
"stochastic-calculus"
] |
75685
|
1
| null | null |
1
|
121
|
>
Let $H$ be an investment strategy in a discrete price model. Prove that $H$ is self-financing if and only if the following holds for the portfolio process $P_t$: $$P_t = P_0 + \sum_{s=1}^tH_{s-1}(X_s-X_{s-1}) \quad \forall t=1, \dots,T$$
---
$\textbf{Definition:}$ $H_t$ self financing strategy $\iff (\Delta H_t)^TX_{t-1}=0\ \forall t=1, \dots,T$.
We did not define what a portfolio process is so I guess the portfolio value process is meant here: $V=V(H)=H^TX$ with prices $X$.
I tried $$(\Delta H_t)^TX_{t-1}=0 \iff H_t^TX_{t-1}=H_{t-1}^TX_{t-1} \iff \Delta(H^TX)_t=\Delta(H\circ X)_t \\
\iff H_t^TX_t=H_0^TX_0+(H\circ X)_t$$
$\forall t=1,...,T$. With $P_t:=H_t^TX_t$ I get $$P_t = P_0 + \sum_{s=1}^tH_s(X_s-X_{s-1})\ \forall t=1,...,T$$ but I need $H_{s-1}$ instead of $H_s$. I also tried integration by parts and got the same result... How do I prove the claim?
Thank you in advance!
|
Discrete self financing strategy
|
CC BY-SA 4.0
| null |
2023-05-27T19:54:02.813
|
2023-05-31T18:15:17.237
|
2023-05-31T18:15:17.237
|
848
|
67583
|
[
"finance-mathematics"
] |
75686
|
1
| null | null |
0
|
77
|
I have read somewhere the following statements, which I have observed to be true most of the time. I want to know how accurate they are mathematically, and how to prove them?
>
$\Gamma > 0$ is a good position to be in, and therefore you have to pay a premium for it, on the other hand $\Gamma < 0$ is a bad position to be in so you get paid premium for it.
>
$\theta$ and $\Gamma$ have an opposite relationship i.e. if, let's say $\theta$ is positive, then $\Gamma$ will be negative and vice-versa. $\theta$ is like the rent you pay to be long $\Gamma$
Are these statements model-free i.e. can they be proven, without assuming anything about the stochastic process that the underlying stock price follows. Is this it, or is there any deeper relationship between $\theta$ and $\Gamma$?
|
Relationship between gamma and theta
|
CC BY-SA 4.0
| null |
2023-05-27T22:43:10.273
|
2023-05-27T22:43:10.273
| null | null |
67584
|
[
"option-pricing",
"greeks"
] |
75687
|
2
| null |
37504
|
0
| null |
To address the problem that "correlation does not necessarily imply causation" --
the [Granger Causality Test in Python](https://www.machinelearningplus.com/time-series/granger-causality-test-in-python/) can be used: it shows whether X and its lags help forecast Y, i.e. whether X and Y can be treated as cause and effect - so in essence it amounts to fitting a [VAR](https://stats.stackexchange.com/questions/304594/interpretation-of-granger-test-outputs-in-r). It does not analyse the lags of a single time series (e.g. X); to measure the lag structure within one series you should look at its autocorrelation function instead.
More precisely ([here](https://www.machinelearningplus.com/time-series/granger-causality-test/)): "We only test if X (and lags of X) is helpful in explaining Y, and thereby help forecasting it. So we are not concerned about the true causal relationship between the variables." Also important: the time series should be stationary, meaning their mean and variance do not change over time.
[There](https://www.statsmodels.org/stable/_modules/statsmodels/tsa/stattools.html#grangercausalitytests) are [4 tests](https://stackoverflow.com/a/75148401/15893581) for Granger non-causality of two time series, each testing the causality hypothesis ([interpretation](https://stats.stackexchange.com/a/477824/347139)); you can apply all of them, or any one of them, to support your [conclusion](https://stackoverflow.com/a/56360565/15893581):
>
H0: X does not Granger-cause Y; H1: X does Granger-cause Y. If the
p-value > 0.05 then H0 cannot be rejected.
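A minimal sketch of running these tests with `statsmodels` (column order matters: the function tests whether the second column Granger-causes the first; the toy data and `maxlag` below are made up for illustration):
```
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# toy stationary series where x leads y by one period
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 1) + 0.1 * rng.normal(size=500)
df = pd.DataFrame({"y": y, "x": x})

# tests H0: "x does NOT Granger-cause y" for lags 1..3
res = grangercausalitytests(df[["y", "x"]], maxlag=3)
for lag, (tests, _) in res.items():
    # p-value of the SSR F-test; tiny here, so x does Granger-cause y in this toy
    print(lag, round(tests["ssr_ftest"][1], 4))
```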
| null |
CC BY-SA 4.0
| null |
2023-05-28T05:54:27.567
|
2023-05-28T06:14:07.020
|
2023-05-28T06:14:07.020
|
67585
|
67585
| null |
75688
|
1
| null | null |
2
|
95
|
Background: I'm building a trading system for the crypto market with Python, and currently having problems on how to effectively save my real time orders/trades to disk, so that I could monitor more easily or do further analysis afterwards.
The current design is as follows:
For each asset I have a Strategy class, with a Position class bound to it. The strategy manages a websocket connection to the exchange, and when the strategy receives any order update, it will update its position. Simple code snippet is like:
```
class Position():
    def __init__(self):
        self.orders = {}
        ...

    def update(self, order_info):
        ...  # process position
        self.orders[order_info['id']] = order_info  # store latest order state


class Strategy():
    def __init__(self):
        self.name = ''
        self.position = Position()

    def on_order_update(self, order_info):
        self.position.update(order_info)
```
Now, all orders are saved in memory in the position class of each strategy, and I want to also save them to disk (maybe a database?). What I have in mind is to simply write the order status to a database every time the strategy receives it. Therefore I would start a db connection when initializing the position class, and add few lines to write to database in its update method.
However, I'm wondering whether there are better ways to achieve this, and I'm mostly worried about the performance of the trading system. I'm mostly doing intraday trading and I would expect each strategy to have around 1 to 2 orders per second, and would trade on up to 100 assets across exchanges at the same time.
My approach mentioned above would then create and manage a lot of connections to the database (and would also need to deal with reconnections). Also, the writing is executed in the same process as the strategy, so it might stall the strategy if a write takes a long time. Since I do not need the order data to be instantaneously available (a delay of a few seconds is not really a problem), maybe it's a better idea to have another process read the order info of all strategies and write it periodically to the database? Or cache the data in Redis first and then persist it to disk?
Question: What is a more effective approach for my scenario?
Disclaimer: I have very limited knowledge on trading system engineering so any suggestions or pointers are more than welcome.
|
Effective way to persist strategy real time orders to database?
|
CC BY-SA 4.0
| null |
2023-05-28T07:03:00.120
|
2023-05-29T17:36:20.197
| null | null |
42028
|
[
"algorithmic-trading",
"database",
"trading-systems"
] |
75689
|
1
| null | null |
-3
|
44
|
I am looking for Python code to draw a curve of the DV01 risk of a puttable bond that matures in 30 years and is puttable at 15 years. Where will the maximum exposure be? Should this bond be sold at a premium or a discount compared to a bond without the optionality?
Write the DV01 function in Python and take the LIBOR curve as the interest rate curve.
|
python code for DV01
|
CC BY-SA 4.0
| null |
2023-05-28T10:16:54.090
|
2023-05-28T10:16:54.090
| null | null |
67590
|
[
"programming"
] |
75690
|
2
| null |
75688
|
2
| null |
Some approaches I can think of.
- Let the hardware capture all incoming and outgoing traffic and filter out the orders and order updates. This has the advantage that it shouldn't come with any added latency and removes this concern from your strategy code but requires more hardware, software and operational work.
- Capture all incoming and outgoing packets and process them in a separate process. This doesn't require any changes to the hardware but still introduces more software which needs to be maintained.
The options above would correspond roughly to the `NoopConsumer()` approach in the code below.
- (Not tested) Share memory using mmap, append only the data you want to share from your strategy process, and read it from another process to keep up to date.
- Write asynchronously, as done below; it's probably faster if the overhead of writing is larger than that of context switching, which seems likely but needs to be tested.
- Use blocking writes, which may be acceptable if you can write really fast.
[This code](https://gist.github.com/bobjansen/0ad44c508baa9799937b39d3b6c6485a) was built from scratch today and I'm not a regular `asyncio` user so it can probably be improved. I think it is a good starting point nonetheless.
```
"""
A simple experiment to time different ways to save orders
Start the servers with
> python main.py server 5
and start the experiment with
> python main.py client 5
Servers can be reused.
Example output:
Connecting to 5 exchanges with sync writer
Connecting to 8080
Connecting to 8081
Connecting to 8082
Connecting to 8083
Connecting to 8084
Took: 5.667398263s
Took: 5.693023433s
Took: 5.707320206s
Took: 5.716781134s
Took: 5.720934913s
Connecting to 5 exchanges with async writer
Connecting to 8080
Connecting to 8081
Connecting to 8082
Connecting to 8083
Connecting to 8084
Took: 5.684973048s
Took: 5.700082285s
Took: 5.709185018s
Took: 5.713764676s
Took: 5.721333215s
Connecting to 5 exchanges with NoopConsumer
Connecting to 8080
Connecting to 8081
Connecting to 8082
Connecting to 8083
Connecting to 8084
Took: 2.466461159s
Took: 2.553055324s
Took: 2.563424393s
Took: 2.563971055s
Took: 2.60299299s
Connecting to 5 exchanges with SqliteConsumer synced writes
Connecting to 8080
Connecting to 8081
Connecting to 8082
Connecting to 8083
Connecting to 8084
Took: 10.003863398s
Took: 10.026773816s
Took: 10.058823117s
Took: 10.08572986s
Took: 10.12419061s
Connecting to 5 exchanges with SqliteConsumer async writes
Connecting to 8080
Connecting to 8081
Connecting to 8082
Connecting to 8083
Connecting to 8084
Took: 9.547965043s
Took: 9.583894856s
Took: 9.603546447s
Took: 9.623569694s
Took: 9.648144184s
"""
import asyncio
import sqlite3
import sys
import time
import numpy as np
from websockets.server import serve
from websockets import connect
DEBUG = False
# Server
hostname = "localhost"
base_port = 8080
average_wait_time = 2
num_orders = 1000
p = 0.5
speed = 1000
seed = 42
# Client
sync_overhead = 0.002
async_overhead = 0.002
np.random.seed(seed)
def print_settings():
print(
f"""Settings:
hostname: {hostname}
base_port: {base_port}
num_orders: {num_orders}
p: {p}
speed: {speed}
seed: {seed}"""
)
class NoopConsumer:
"""Takes a message and does nothing"""
def write(self, exchange_id, order_id):
pass
class Consumer:
"""Consumes and writes a message and adds some overhead"""
def __init__(self, overhead):
self.overhead = overhead
self.messages_received = []
def write(self, exchange_id, order_id):
self.messages_received.append(f"{exchange_id}|{order_id}")
time.sleep(self.overhead)
async def async_write(self, exchange_id, order_id):
self.messages_received.append(f"{exchange_id}|{order_id}")
time.sleep(self.overhead)
class SqliteConsumer:
"""Non-thread safe writer to a sqlite3 in 'test_db.sqlite3'"""
def __init__(self):
self.con = sqlite3.connect("test_db.sqlite3")
self.cur = self.con.cursor()
self.cur.execute("DROP TABLE IF EXISTS orders")
self.cur.execute("CREATE TABLE orders(exchange_id int, order_id int)")
def write(self, exchange_id, order_id):
self.cur.execute(
f"INSERT INTO orders (exchange_id, order_id) VALUES ({exchange_id}, {order_id})"
)
self.con.commit()
async def async_write(self, exchange_id, order_id):
self.cur.execute(
f"INSERT INTO orders (exchange_id, order_id) VALUES ({exchange_id}, {order_id})"
)
self.con.commit()
class Exchange:
"""
Bare functionality to model an exchange
An exchange listens on hostname:port and sends exchange_id:order_id over a
websocket at random intervals.
"""
def __init__(self, exchange_id, hostname, port, average_wait_time):
self.exchange_id = exchange_id
self.hostname = hostname
self.port = port
self.wait_times = np.random.poisson(average_wait_time, num_orders)
print(f"Creating {self.exchange_id} on {self.hostname}:{self.port}")
print(f"Average wait time param: {average_wait_time}")
print(f"Average wait time: {self.wait_times.sum() / (speed * num_orders)}")
print(f"Total wait time: {self.wait_times.sum() / speed}")
async def order_feed(self, websocket):
async for message in websocket:
if message == "start":
start = time.time_ns()
send_time = 0
for i, wait_time in enumerate(self.wait_times):
await asyncio.sleep(wait_time / speed)
start_send = time.time_ns()
await websocket.send(f"{self.exchange_id}:{i}")
send_time += time.time_ns() - start_send
await websocket.send("done")
print(
f"Took {(time.time_ns() - start) / 1e9}s to send all orders on {self.exchange_id}"
)
print(f"Total send time: {send_time / 1e9}s")
print_settings()
async def run(self):
async with serve(self.order_feed, self.hostname, self.port):
await asyncio.Future()
class Client:
"""Connects to an exchange and records some results
The feed is started with 'start' and stopped when the message 'done' is
    received. A coin flip is performed to decide whether the message is saved.
"""
def __init__(self, port, writer, write_async):
self.port = port
self.writer = writer
self.write_async = write_async
async def run(self):
print(f"Connecting to {self.port}")
async with connect(f"ws://{hostname}:{self.port}") as websocket:
await websocket.send("start")
start = time.time_ns()
async for message in websocket:
if message == "done":
break
if np.random.choice([True, False]):
exchange_id, order_id = message.split(":")
if self.write_async:
await self.writer.async_write(exchange_id, order_id)
else:
self.writer.write(exchange_id, order_id)
print(f"Took: {(time.time_ns() - start) / 1e9}s")
class Strategy:
"""A strategy holds mulitple connections"""
def __init__(self, clients):
self.clients = clients
async def run_all(self):
async with asyncio.TaskGroup() as tg:
for client in self.clients:
tg.create_task(client.run())
if __name__ == "__main__":
if len(sys.argv) < 3:
print("Provide either 'server' or 'client' as argument and a count")
else:
arg = sys.argv[1]
num_servers = int(sys.argv[2])
if arg == "server":
print_settings()
async def run_servers(num_servers):
async with asyncio.TaskGroup() as tg:
for i in range(num_servers):
exchange = Exchange(
i + 1, hostname, base_port + i, average_wait_time
)
tg.create_task(exchange.run())
asyncio.run(run_servers(num_servers))
elif arg == "client":
print(f"Connecting to {num_servers} exchanges with sync writer")
clients = [
Client(base_port + i, Consumer(sync_overhead), False)
for i in range(num_servers)
]
asyncio.run(Strategy(clients).run_all())
if DEBUG:
print("\n".join(strategy.writer.messages_received))
print(f"Connecting to {num_servers} exchanges with async writer")
clients = [
Client(base_port + i, Consumer(async_overhead), True)
for i in range(num_servers)
]
asyncio.run(Strategy(clients).run_all())
if DEBUG:
print("\n".join(strategy.writer.messages_received))
print(f"Connecting to {num_servers} exchanges with NoopConsumer")
clients = [
Client(base_port + i, NoopConsumer(), False) for i in range(num_servers)
]
asyncio.run(Strategy(clients).run_all())
if DEBUG:
print("\n".join(strategy.writer.messages_received))
print(
f"Connecting to {num_servers} exchanges with SqliteConsumer synced writes"
)
clients = [
Client(base_port + i, SqliteConsumer(), False)
for i in range(num_servers)
]
asyncio.run(Strategy(clients).run_all())
print(
f"Connecting to {num_servers} exchanges with SqliteConsumer async writes"
)
clients = [
Client(base_port + i, SqliteConsumer(), True)
for i in range(num_servers)
]
asyncio.run(Strategy(clients).run_all())
else:
print(f"Unknown arg '{arg}', exiting")
```
| null |
CC BY-SA 4.0
| null |
2023-05-28T19:36:56.513
|
2023-05-29T17:36:20.197
|
2023-05-29T17:36:20.197
|
848
|
848
| null |
75691
|
1
| null | null |
0
|
64
|
I'm reading M. S. Joshi's paper "Log-Type Models, Homogeneity of Option Prices and Convexity", and I'm having problem understanding equation 1:
$$
C(S_0, K) = \int (S - K)_+ \Phi\left(\frac{S}{S_0}\right)\frac{dS}{S}
$$
Could anyone help? I can only write down
$$
C(S_0, K) = \int (S - K)_+ \phi(Z)dZ
$$
where $S = S_0 e^{(r-\frac12\sigma^2)t - \sigma \sqrt{t}Z}$ and $Z$ being the standard normal random variable, but I don't see the connection here.
|
Homogeneity of BS Formula
|
CC BY-SA 4.0
| null |
2023-05-29T00:46:15.923
|
2023-05-29T00:46:15.923
| null | null |
55715
|
[
"options",
"black-scholes"
] |
75692
|
2
| null |
75685
|
2
| null |
It is enough to consider one time step. The portfolio value changes by
\begin{align}
P_t-P_{t-1}&=H_t^\top X_t-H^\top _{t-1}X_{t-1}\\
&=\underbrace{H_t^\top(X_t-X_{t-1})}_{\textstyle(A)}+\underbrace{(H_t^\top-H^\top_{t-1})X_{t-1}}_{\textstyle (B)}\,.
\end{align}
The term $(B)$ reflects changes in the portfolio value that are due to reallocations and/or withdrawals resp. additions of assets to the portfolio with newly added funds. The first term is the value change due to
the assets having new values at the end of the period.
When the strategy $H$ is self-financing, only reallocations are allowed, that is, $B$ must be zero. Nothing is allowed that adds or withdraws value from the portfolio. An example with two assets having prices
$$
X^1_{t-1}=100\,,\quad X^2_{t-1}=80
$$
and current allocations
$$
H^1_{t-1}=8\,,\quad H^2_{t-1}=10\,.
$$
Then you are allowed to sell five shares of the second asset to buy four of the first:
$$
H^1_t-H^1_{t-1}=4\,,\quad H^2_t-H^2_{t-1}=-5\,.
$$
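Indeed, plugging these numbers into the term $(B)$ shows that it vanishes:
$$
(H_t^\top-H^\top_{t-1})X_{t-1}=4\cdot 100+(-5)\cdot 80=0\,,
$$
so the purchase of the first asset is financed entirely by the sale of the second.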
Conclusion: $H$ is self-financing if and only if one of the two equivalent
conditions hold
- $P_t-P_{t-1}=H_t^\top(X_t-X_{t-1})\,,$
- $(H_t^\top-H^\top_{t-1})X_{t-1}=0\,.$
| null |
CC BY-SA 4.0
| null |
2023-05-29T05:35:16.417
|
2023-05-29T10:49:18.933
|
2023-05-29T10:49:18.933
|
16148
|
58786
| null |
75693
|
1
| null | null |
0
|
51
|
I am looking for references or practical solutions for the following. In the usual factor approach for equities with panel data regression (for each stock, explain future returns given stock characteristics), how do I add industry specific models?
Example: I have 500 stocks and some stock factors are general like the standard Fama-French factors or momentum (apply to all 500) while others are industry specific eg factors only applicable for Energy stocks. If possible I'd like to allow groups to be overlapping, eg some features are Technology, others Software, etc, but the non-overlapping case is useful too.
One approach is to include all factors. A drawback is that the group-specific factors are undefined for stocks outside the group - one would need to define them somehow, e.g. as the average value over the group - which doesn't seem very statistically efficient.
A more developed solution would be some Bayesian hierarchical model. However, implementation is more tricky - it may not be worth it compared to a simpler approach where I'd fit a model independently for each group then combine models with a heuristic.
Any reference in the equities literature, or suggestion that is more practical than a full blown Bayesian model?
|
equities industry factor models
|
CC BY-SA 4.0
| null |
2023-05-29T09:16:35.200
|
2023-05-29T09:16:35.200
| null | null |
66636
|
[
"equities",
"regression",
"fama-french",
"paneldata"
] |
75694
|
2
| null |
75683
|
8
| null |
Don't take this as an answer per se but, as mentioned in my comment, more as a summary of (imo) Björk's clear explanation, which hopefully can convince you that there is nothing wrong with the BS PDE and the self-financing portfolio, even though the original Black-Scholes derivation may leave room for some doubt.
So let's assume that the market under $\mathbb P$ is
$$
dS(t) = \mu S(t) dt + \sigma S(t) dW(t) \\
dB(t) = rB(t) dt
$$
with $W(t)$ a standard Brownian motion, and following BMS' original assumptions $r, \sigma$ are constants.
The crux is I believe the following lemma:
Lemma
Assume there exists a scalar process $F(t)$ such that
$$
\frac{dF(t)}{F(t)} = w_B(t) \frac{dB(t)}{B(t)} + w_S(t) \frac{dS(t)}{S(t)}
$$
where $w_B, w_S$ are adapted, and for all $t$
$$
w_B(t) + w_S(t) = 1
$$
Then the process defined by
$$
V(t) = h_B(t) B(t) + h_S(t) S(t) \\
h_B(t) = w_B(t) \frac{ F(t)}{B(t)},\; h_S(t) = w_S(t) \frac{ F(t)}{S(t)}
$$
is self-financing and $V(t) = F(t)$ for all $t$.
Proof
That $V(t) = F(t)$ for all $t$ is clear from the definition, but that doesn't mean it's self-financing. But
\begin{align}
dV(t) &= d\left[ w_B(t) \frac{ F(t)}{B(t)} B(t) + w_S(t) \frac{ F(t)}{S(t)} S(t) \right] \\
&= d [w_B(t) F(t) + w_S(t)F(t) ] \\
&= dF(t)
\end{align}
because $w_B(t) + w_S(t) = 1$ for all $t$.
Now, it's pretty clear I think that the following Theorem holds (I'll use subscripts to denote partial derivatives):
Theorem Given the market under $\mathbb P$ as above and define $F$ as the solution to
$$
F_t + rSF_S + \tfrac12 \sigma^2 S^2 F_{SS} = rF \quad (*)\\
F(T,S(T)) = \Phi(S(T))
$$
then the process $V(t) = h_B(t) B(t) + h_S(t) S(t)$ with
$$
h_B(t) = \frac{F(t) - S(t)F_S(t)}{B(t)}, \; h_S(t) = F_S(t)
$$
is self-financing and for all $t$ we have $V(t) = F(t)$.
Proof
Again it is clear that $V(t) = F(t)$, and we just need to demonstrate that it is self-financing. By an application of Ito's lemma we can write
$$
\frac{dF}{F} = \frac{F_t + \tfrac12 \sigma^2 S^2 F_{SS}}{rF} \frac{dB}{B} + \frac{SF_S}{F} \frac{dS}{S}
$$
Since $F$ satisfies the PDE (*) we can write this as
$$
\frac{dF}{F} = \frac{rF - rSF_S}{rF} \frac{dB}{B} + \frac{SF_S}{F} \frac{dS}{S}
$$
So it is clear that
$$
\frac{rF - rSF_S}{rF} + \frac{rSF_S}{rF} = 1
$$
and therefore $V$ is self-financing as per the Lemma above.
| null |
CC BY-SA 4.0
| null |
2023-05-29T09:20:38.620
|
2023-05-29T09:34:56.467
|
2023-05-29T09:34:56.467
|
65759
|
65759
| null |
75695
|
2
| null |
75688
|
1
| null |
If I may suggest, [QuestDB](https://questdb.io/docs/) is a time-series database written exactly for these use cases, with many users in the financial industry. Ingesting data can be done using any PostgreSQL-compatible library, but that is not really efficient at high rates. For higher frequency you can ingest data using the ILP protocol, which is available via a [native Python client](https://py-questdb-client.readthedocs.io/en/latest/).
An example for ingestion:
```
from questdb.ingress import Sender
with Sender('localhost', 9009) as sender:
sender.row(
'trades',
symbols={'symbol': 'ETH-USD', 'side': 'buy'},
        columns={'prices': 2615.54, 'volume': 0.00044})
sender.flush()
```
Queries can be executed directly using any PostgreSQL-compatible library, like psycopg or sqlalchemy. An example of a candle chart at 15m intervals could be
```
SELECT
timestamp,
first(price) AS open,
last(price) AS close,
min(price),
max(price),
sum(amount) AS volume
FROM trades
WHERE symbol = 'BTC-USD' AND timestamp > dateadd('d', -1, now())
SAMPLE BY 15m ALIGN TO CALENDAR;
```
This example can be run at the public demo instance [https://demo.questdb.io](https://demo.questdb.io) using the public trades dataset, which is updated with data from coinbase every second or so.
You can use QuestDB self hosted (apache 2.0 license) or as a managed service using the [QuestDB Cloud](https://cloud.questdb.com) (with a free tier of $200).
| null |
CC BY-SA 4.0
| null |
2023-05-29T09:21:01.280
|
2023-05-29T09:21:01.280
| null | null |
67602
| null |
75696
|
1
| null | null |
2
|
40
|
As part of my research for my masters thesis, I'm testing out the effectiveness of some different models in Monte Carlo simulations for path dependant options.
I will be attempting a model-free approach to simulating price paths so these will simply be used as a benchmark. I want to choose a few models which are valid and commonly used for MC simulations today.
So far I have implemented Black-Scholes (the classic), Merton's Jump Diffusion model and a Heston stochastic volatility model. Are these commonly used in practice today? Are there any other models which should be included?
|
Benchmark Model for Path-Dependant Monte Carlo Simulations?
|
CC BY-SA 4.0
| null |
2023-05-29T09:30:02.080
|
2023-05-29T09:30:02.080
| null | null |
67562
|
[
"monte-carlo",
"asset-pricing",
"sde",
"best-practices"
] |