Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
75436
1
null
null
4
106
How can I compute the derivative of the payoff function for an American put option? In the paper ["Smoking adjoints: fast Monte Carlo Greeks" by Giles and Glasserman (2006)](https://www0.gsb.columbia.edu/faculty/pglasserman/Other/RiskJan2006.pdf) they compare two methods to calculate pathwise derivatives:

- Forward method
- Adjoint method

Both of these methods require the derivative of the payoff function w.r.t. the parameter. E.g. to approximate the delta, one needs to compute $$\frac{\partial g}{\partial X(0)}$$ where $g$ is the payoff function and $X(0)$ is the (spot) value of the underlying at time 0. However, they do not specify how this is done for American options. I am concerned that it very well might depend on the optimal stopping time $\tau^*$.
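For context, the pathwise estimator is unambiguous in the European case; below is a minimal sketch (my own, not from the paper) of what $\frac{\partial g}{\partial X(0)}$ looks like there for a put under GBM. All parameter values are invented.

```
import numpy as np

rng = np.random.default_rng(0)
X0, K, r, sigma, T, n_paths = 100.0, 100.0, 0.02, 0.20, 1.0, 200_000

Z = rng.standard_normal(n_paths)
XT = X0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# European put: g(X_T) = (K - X_T)^+, so dg/dX(0) = -1{X_T < K} * dX_T/dX(0)
#                                                 = -1{X_T < K} * X_T / X(0)   (GBM is linear in X(0))
pathwise = -(XT < K).astype(float) * XT / X0
delta_estimate = np.exp(-r * T) * pathwise.mean()
print("pathwise put delta:", delta_estimate)
```

The American case is harder precisely because the payoff is evaluated at the exercise time, which is the concern raised in the question.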
Pathwise sensitivities of American options - Derivative of the American payoff function
CC BY-SA 4.0
null
2023-05-04T15:12:44.913
2023-05-04T15:12:44.913
null
null
56812
[ "greeks", "american-options", "simulations", "payoff", "sensitivities" ]
75437
1
null
null
1
100
I have learnt that the Sharpe ratio is a measure of the annualized mean return rate over the annualized standard deviation of the return rate distribution. I also learnt that when compounding, the mean of the return rate distribution does not correspond to the overall return rate at the end of the test period (the classic example is: I have 100 USD, then I lose 50%, then I gain 50%, and I end up with 75 USD, which is an overall return of -25%, while the return mean is 0%). Since the return mean does not correspond to reality in most cases (i.e., when returns are compounded), why doesn't the Sharpe ratio take the cumulative return (i.e., exp(sum of log returns)) as its numerator rather than the mean of return rates? Please note that I've done a lot of research on Google and StackExchange and there seems to be no definitive standard response to this question.
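A minimal numeric illustration of the gap described above, using only NumPy and the 100 USD example from the question:

```
import numpy as np

prices = np.array([100.0, 50.0, 75.0])
simple_returns = prices[1:] / prices[:-1] - 1          # [-0.50, +0.50]
log_returns = np.log(prices[1:] / prices[:-1])

print("mean simple return :", simple_returns.mean())          # 0.0
print("compounded return  :", prices[-1] / prices[0] - 1)     # -0.25
print("exp(sum of logs)-1 :", np.exp(log_returns.sum()) - 1)  # -0.25 as well
print("mean log return    :", log_returns.mean())             # about -0.1438
```

The mean of the simple returns is the quantity that ignores compounding; the cumulative return and exp(sum of log returns) agree by construction.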
Why isn't the Sharpe Ratio computed on the cumulative return rather than return mean?
CC BY-SA 4.0
null
2023-05-04T15:47:29.777
2023-05-10T17:04:22.080
2023-05-10T17:04:22.080
63143
63143
[ "sharpe-ratio" ]
75438
2
null
75437
1
null
Sharpe uses log returns, not simple returns. The log return of 50/100 is -0.6931; the log return of 75/50 is 0.4054; the average is -0.1438. This is what Sharpe uses.
null
CC BY-SA 4.0
null
2023-05-04T16:29:09.797
2023-05-04T16:29:09.797
null
null
66963
null
75439
1
null
null
4
357
From what I understand, orders in some dark pools are typically matched at the midpoint price, and matching is prioritized by size, price, time. Market orders execute at the midpoint. Limit orders execute at the midpoint only if the midpoint is equal to or better than the specified limit price. Given the aforementioned criteria, consider the following dark pool orderbook:

|Buy Qty |Buy Px |Sell Px |Sell Qty |
|-------|------|-------|--------|
|200 |10.20 |10.25 |300 |
|100 |10.15 |10.30 |500 |
|400 |10.10 |10.35 |200 |
|300 |10.05 |10.40 |100 |

The midpoint price of the dark orderbook is calculated as follows using the top of the orderbook, i.e. (10.20 + 10.25) / 2 = 10.225. However, dark orderbooks obtain their midpoint price from visible orderbooks, and match orders based on the visible orderbook's midpoint price.

Questions:

- Do the Buy orders only match at the midpoint price, or can they match lower than the midpoint price?
- Do the Sell orders only match at the midpoint price, or can they match higher than the midpoint price?
- If a Buy Market order with quantity 500 was entered, which Sell order(s) would it match against, and at what price?
- If a Buy Limit order with quantity 500 and price 10.30 was entered, which Sell order(s) would it match against, and at what price?
- What other examples of new Buy orders entered into this orderbook would cause matches?
Dark Pool Midpoint Order Matching
CC BY-SA 4.0
null
2023-05-04T16:29:20.497
2023-05-05T09:12:41.160
2023-05-05T09:12:41.160
67281
67281
[ "orderbook" ]
75440
1
null
null
0
65
I am trying to evaluate the present value of some cashflows and QuantLib does not return the discount factors that I am expecting. I have a Risk Free (Zero Coupon Bond) Yield curve:

```
import QuantLib as ql

dates = [ql.Date(1,12,2022), ql.Date(2,12,2022), ql.Date(1,1,2023), ql.Date(1,2,2023),
         ql.Date(1,3,2023), ql.Date(1,4,2023), ql.Date(1,5,2023), ql.Date(1,6,2023)]
rates = [0.0, 0.0059, 0.0112, 0.0160, 0.0208, 0.0223, 0.0239, 0.0254]
```

So I create a QuantLib ZeroCurve:

```
discount_curve_day_count = ql.ActualActual(ql.ActualActual.ISDA)
discount_curve_compounding_frequency = ql.Annual
discount_curve_compounding_type = ql.Compounded
calendar = ql.NullCalendar()

zero_curve = ql.ZeroCurve(dates, rates, discount_curve_day_count, calendar,
                          ql.Linear(), discount_curve_compounding_type,
                          discount_curve_compounding_frequency)
```

I define the Leg of Cashflows:

```
cf_dates = [ql.Date(18,1,2023), ql.Date(18,2,2023), ql.Date(18,3,2023), ql.Date(18,4,2023),
            ql.Date(18,5,2023), ql.Date(18,6,2023), ql.Date(18,7,2023), ql.Date(18,8,2023),
            ql.Date(18,9,2023), ql.Date(18,10,2023), ql.Date(18,11,2023), ql.Date(18,12,2023),
            ql.Date(18,1,2024), ql.Date(18,2,2024), ql.Date(18,3,2024), ql.Date(18,4,2024),
            ql.Date(18,5,2024), ql.Date(18,6,2024), ql.Date(18,7,2024), ql.Date(18,8,2024),
            ql.Date(18,9,2024), ql.Date(18,10,2024), ql.Date(18,11,2024), ql.Date(18,12,2024),
            ql.Date(18,1,2025)]
cf_amounts = [-30000.0, 203.84, 184.11, 203.84, 634.37, 634.37, 634.37, 634.37, 634.37,
              634.37, 634.37, 634.37, 634.37, 634.37, 634.37, 634.37, 634.37, 634.37,
              634.37, 634.37, 634.37, 634.37, 634.37, 634.37, 634.37]

cf = []
for i in range(len(cf_dates)):
    cashflow = ql.SimpleCashFlow(cf_amounts[i], cf_dates[i])
    cf.append(cashflow)
leg = ql.Leg(cf)
```

And I want to evaluate the discount factors at the cashflow dates:

```
for cf in leg:
    print(cf.date(), zero_curve.discount(cf.date()))
```

Unfortunately, the values I get are slightly off (error=0.02) from the expected ones, calculated using the following compounding formula: $$ d = \frac{1}{(1+r)^y \left( 1+r\frac{d_p}{d_y} \right)} $$ where $r$ is the linearly interpolated rate from the curve, $y$ is the number of years that have passed from the first cashflow date, $d_p$ is the number of days that have passed from the previous payment and $d_y$ the number of days in the year in which the payment occurs (all of these are calculated using the Act/Act ISDA daycount convention). Any chance I can get those numbers right using QuantLib?
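Not a fix for the 0.02 discrepancy, but a sanity check that may help isolate it: with `Compounded`/`Annual` and Act/Act ISDA the curve keeps the full year fraction in the exponent, i.e. $(1+r)^{-t}$, rather than splitting whole years from a simple stub as the hand formula above does. A minimal sketch reusing the objects defined in the question (at the curve's own nodes the relation is exact; between nodes the interpolation details can differ slightly):

```
dc = discount_curve_day_count
ref = dates[0]                          # the curve's reference date

for d, r in zip(dates[1:], rates[1:]):  # at the input nodes the relation holds exactly
    t = dc.yearFraction(ref, d)
    print(d, (1.0 + r) ** (-t), zero_curve.discount(d))
```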
QuantLib: Getting the present value of a leg of cashflows using a 'risk-free' yield curve
CC BY-SA 4.0
null
2023-05-04T16:35:50.373
2023-05-12T07:10:37.223
2023-05-12T07:10:37.223
67268
67268
[ "quantlib", "discount-factor-curve", "risk-free-rate" ]
75441
1
null
null
2
102
This question may not be very relevant to quantitative finance, but I guess fixed-income modellers may encounter it some time as well. The question is about the days to settlement for US corporate bond transactions. My current understanding is that they used to be settled T+3 before Sep 4, 2017. In 2017, the SEC adopted a rule requiring brokers to switch from T+3 to T+2, effective starting from Sep 5, 2017. In other words, from Sep 5, 2017 onwards, corporate bond transactions are settled T+2. In addition, I also checked the data field `stlmnt_dt` in the [TRACE Enhanced Database (post-2/6/12)](https://www.finra.org/filing-reporting/trace/historic-data-file-layout-post-2612-files) and found that the majority of daily transactions (~90%) are settled T+3 before Sep 5, 2017 and that the majority are settled T+2 starting from Sep 5, 2017. However, I would still like to ask corporate bond traders to confirm my findings.
Days to settlement for US corporate bonds
CC BY-SA 4.0
null
2023-05-04T16:52:23.277
2023-05-04T19:46:12.380
null
null
33538
[ "fixed-income", "bond", "trading" ]
75442
2
null
75441
2
null
Not technically a corporate bond trading source but the following SIFMA (Securities Industry and Financial Market Association; the leading trade association for institutional participants in the U.S. and global capital markets) publications are consistent with your findings. [SIFMA:Shortening the Settlement Cycle](https://www.sifma.org/wp-content/uploads/2017/08/t2_industry_webinar_20170720.pdf) [SIFMA:T+2 Settlement Update](https://www.sifma.org/wp-content/uploads/2017/09/Sep-8-T2-Update-Fact-Sheet.pdf)
null
CC BY-SA 4.0
null
2023-05-04T19:46:12.380
2023-05-04T19:46:12.380
null
null
50071
null
75443
2
null
3901
1
null
A number of economic theorems relevant for large financial institutions depend on the truth of the Riemann Hypothesis. They include things like uniqueness results of mixed-strategy game theory equilibria, demonstration that various policies converge and are therefore (eventually) equivalent along some axes, and limiting the potential downside of classes of investment or policy. The practical policies that depend on RH being true are obviously not going to be massively affected (unless it's proven false), but the policy-choosers will gain substantial peace of mind from upgrading their assumption to a certainty.
null
CC BY-SA 4.0
null
2023-05-05T03:14:44.377
2023-05-05T03:14:44.377
null
null
67289
null
75444
1
null
null
0
34
I'm new to quantitative finance and interested in performing a PCA on the implied volatility surface. However, the set of quoted points on the surface changes over time, so I need to interpolate the data to obtain a fixed grid of points on the surface with fixed moneyness and maturity. How do I do this for each calendar day while guaranteeing no arbitrage?
Implied Volatility Surface Interpolation for fixed moneyness and maturity on each day of the calendar
CC BY-SA 4.0
null
2023-05-05T06:24:38.523
2023-05-05T06:24:38.523
null
null
67293
[ "options", "black-scholes", "implied-volatility", "no-arbitrage-theory", "pca" ]
75445
1
null
null
3
132
If I am testing a trend-following strategy, should I detrend the data before applying the rules, or should I generate signals based on the original price series but use detrended data for performance evaluation? If I use a trend-following strategy with detrended data, it may not generate signals as it would with the real price series.
Detrending price series for back testing
CC BY-SA 4.0
null
2023-05-05T08:15:23.167
2023-05-06T13:26:49.643
2023-05-05T08:31:25.457
16148
41582
[ "backtesting", "bootstrap" ]
75446
2
null
75445
6
null
Depends on what the goal is. If you want to backtest a price-based signal (e.g. RSI, SMA crossovers, Bollinger Bands or other technical indicators) then it wouldn't make much sense to detrend the time series just for the sake of it. The backtest doesn't really care about the nature of the signal as long as you don't use information unavailable at the time of financial decision making (i.e. future data). Detrending becomes important for statistical analysis and inference. For example if you want to answer something like "how well does signal $x_t$ predict returns of series $y$ at $t+1$". You will be led to a wrong conclusion if you don't account for the fact that the price series is highly autocorrelated.
null
CC BY-SA 4.0
null
2023-05-05T09:00:42.420
2023-05-06T07:55:50.257
2023-05-06T07:55:50.257
31457
31457
null
75447
1
null
null
0
27
I was wondering why `RangeAccrualFloatersCoupon` is not accessible through the QuantLib SWIG bindings. I am currently using QuantLib Python. Can anyone help? Regards.
RangeAccrualFloaterCoupon not visible Quantlib Swig
CC BY-SA 4.0
null
2023-05-05T09:32:58.813
2023-05-05T14:02:51.323
2023-05-05T14:00:44.620
308
36003
[ "quantlib" ]
75448
2
null
3901
1
null
The humor aspect alluded to by Matt Wolf and user2303 certainly makes sense: in that vein, since the Riemann hypothesis is one of the unsolved "Millennium" math problems for which there's a monetary award (apparently of USD 1mm) I can certainly see a HF manager's angle on it. Though why he chose that particular problem rather than, say, the Hodge Conjecture, is elusive - perhaps he has an affinity for number theory over geometry!
null
CC BY-SA 4.0
null
2023-05-05T11:36:08.303
2023-05-05T11:36:08.303
null
null
35980
null
75449
2
null
75447
3
null
Not all of QuantLib is exported through SWIG—the process is only semi-automated. You can [open an issue on GitHub](https://github.com/lballabio/QuantLib-SWIG) to request that it be exported.
null
CC BY-SA 4.0
null
2023-05-05T14:02:51.323
2023-05-05T14:02:51.323
null
null
308
null
75450
1
null
null
0
29
Are there errata in Brigo's text on Interest Rate Models, in chapter 16 where the YYIIS payoff is defined? In formula (16.3) Party A's payoff is defined as: \begin{align} \\ N\psi_i\left[\frac{I\left(T_i\right)}{I\left(T_{i-1}\right)}-1\right] \\ \end{align} where $\psi_i$ is the floating-leg year fraction for the interval $\left[T_{i-1},T_i\right]$. I think the CPI return over the period is not annualized, so we should not need the $\psi_{i}$ factor to convert an annual rate into the period return. Isn't it? I am not sure, because these possible errata appear in the following pages of the chapter as well... Thanks in advance
YYIIS Inflation swap chapter 16 of Brigo's text
CC BY-SA 4.0
null
2023-05-05T14:29:07.880
2023-05-05T15:33:53.520
2023-05-05T15:01:15.177
26805
26805
[ "swaps", "inflation", "payoff" ]
75451
1
null
null
1
155
If the discounted price of any asset is a martingale under the risk-neutral measure, why is $E^Q[e^{-rT} (S_T-K)_+ | F_t]$ not merely $e^{-rt} (S_t-K)_+$? This is something I wanted to clarify, since that's the definition of a martingale. Instead we use the lognormal distribution of the stock price and evaluate the expectation completely to get the Black-Scholes call price.
Discounted price of an option
CC BY-SA 4.0
null
2023-05-05T14:35:55.067
2023-05-07T12:42:30.913
2023-05-07T12:42:30.913
47484
67298
[ "options", "risk-neutral-measure", "pricing", "martingale" ]
75452
2
null
75450
0
null
One possible explanation I see is that the natural periodicity of this product is annual and that the year fraction is practically 1, so multiplying by the year fraction is motivated by the day count convention agreed between party A and party B.
null
CC BY-SA 4.0
null
2023-05-05T15:33:53.520
2023-05-05T15:33:53.520
null
null
26805
null
75453
2
null
75407
2
null
Delta and Gamma measure the sensitivity of option prices to only one variable - the underlying price. Most option models include multiple other variables, the most significant of which is volatility. Since volatility in option models is a forward-looking measure and can't be directly observed, it is implied from option prices, and often acts as a "market sentiment" variable. Options can be very sensitive to volatility, so ignoring it as a driver of option prices is a futile endeavor. Even adding volatility will not give you a complete picture, as there are other drivers of option prices like interest rates and time, and there are often correlations between variables that could impact PnL attribution (e.g. depending on how volatility is modeled, volatility can be sensitive to the underlying price). I have developed PnL attribution reports for option traders, and there are always at least 5 attribution columns. Delta and Vega are typically the largest two factors, with gamma and other sensitivities being less important (except when there are large market moves). Even then, there can be significant unallocated changes in certain circumstances. It's like trying to compute how long it takes to drive from point A to point B by only looking at speed limits, without considering traffic, construction, etc.
null
CC BY-SA 4.0
null
2023-05-05T15:59:16.737
2023-05-05T15:59:16.737
null
null
20213
null
75454
2
null
75451
6
null
The process $Y_t:=(S_t-K)^+$ cannot be the price of a traded asset because of Jensen's inequality. Instead, it is the price of the option which is a martingale. In the Black-Scholes model, the primitive market model has only two assets: the stock with price $S_t$ and the money market account (MMA) with price $B_t:=e^{rt}$. Within this market, Black and Scholes prove that it is possible to replicate a European (call) option with payoff $(S_T-K)^+$ at some future expiry $T>t$. The value $V$ of this claim is given by its risk-neutral expectation: $$V_t=B_tE^{\mathbb{Q}}\left(\left.\frac{(S_T-K)^+}{B_T}\right|\mathscr{F}_t\right)\tag{1}$$ where $B_t/B_T=e^{-r(T-t)}$. Given the option can be replicated, it can be viewed as an asset. One can then consider an "augmented" market model with the stock, the MMA and the option. Per risk-neutral theory, it is the discounted price of an asset which is a martingale, that is if $P$ is the price of an asset then the process $$\frac{P_t}{B_t}$$ is a martingale. The price of the option is $V$ therefore letting $s<t$: \begin{align} B_sE^\mathbb{Q}\left(\left.\frac{V_t}{B_t}\right|\mathscr{F}_s\right) &=B_sE^\mathbb{Q}\left(\left.\frac{B_tE^{\mathbb{Q}}\left(\left.\frac{(S_T-K)^+}{B_T}\right|\mathscr{F}_t\right)}{B_t}\right|\mathscr{F}_s\right) \\ &=B_sE^\mathbb{Q}\left(\left.E^{\mathbb{Q}}\left(\left.\frac{(S_T-K)^+}{B_T}\right|\mathscr{F}_t\right)\right|\mathscr{F}_s\right) \\[7pt] &\overbrace{=}^{\text{LIE}}B_sE^\mathbb{Q}\left(\left.\frac{(S_T-K)^+}{B_T}\right|\mathscr{F}_s\right) \\[3pt] &\overbrace{=}^{\text{(1)}}V_s \end{align} where we have used the [Law of Iterated Expectations](https://en.wikipedia.org/wiki/Law_of_total_expectation) (LIE) and the definition $(1)$. Dividing by $B_s$: \begin{align} E^\mathbb{Q}\left(\left.\frac{V_t}{B_t}\right|\mathscr{F}_s\right) &=\frac{V_s}{B_s}, \end{align} hence the discounted price of the option is indeed a martingale.
null
CC BY-SA 4.0
null
2023-05-05T16:20:06.887
2023-05-06T07:31:44.300
2023-05-06T07:31:44.300
20454
20454
null
75455
2
null
16430
0
null
I know the question is old, but it can still be of interest.

#### Replines vs loan-by-loan modelling

One way to reduce the computation time is to create so-called replines to aggregate your portfolio into fewer, representative loans. E.g.

|item |balance |rate |maturity |
|----|-------|----|--------|
|loan 1 |100 |4.00% |6 |
|loan 2 |300 |5.00% |10 |
|repline (weighted avg) |400 |4.75% |9 |

You can do this with clustering algorithms like K-Means, easily available for both R and Python; a sketch follows below. Remember to scale your data.

#### Loan-by-loan modelling

Even after creating replines, you will still need to amortise your loans. This is something which can be done relatively easily in R, Python or Julia. A few notes:

- Python's implementation of the pmt function (moved from numpy to numpy-financial) is very slow. If you rewrite your own, and set it up for scalar values only, it will be much faster. There are discussions on the numpy-financial GitHub.
- This kind of code lends itself to parallelisation quite well; e.g. you have 2,000 loans, you divide them into chunks of 500 and send each to a separate processor.
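A minimal sketch of the repline step described above, with made-up loan data and an arbitrary cluster count; scaling is done before clustering as noted.

```
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
loans = pd.DataFrame({
    "balance":  rng.lognormal(11, 0.5, 2000),
    "rate":     rng.uniform(0.02, 0.08, 2000),
    "maturity": rng.integers(12, 360, 2000),
})

X = StandardScaler().fit_transform(loans)                      # scale before clustering
loans["cluster"] = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X)

def to_repline(g):
    w = g["balance"]
    return pd.Series({
        "balance":  w.sum(),                                   # total balance of the repline
        "rate":     np.average(g["rate"], weights=w),          # balance-weighted coupon
        "maturity": np.average(g["maturity"], weights=w),      # balance-weighted maturity
    })

replines = loans.groupby("cluster").apply(to_repline)
print(replines.head())
```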
null
CC BY-SA 4.0
null
2023-05-05T23:07:52.153
2023-05-05T23:07:52.153
null
null
40827
null
75457
1
75459
null
2
201
$$\frac{dX_t}{X_t}=\alpha\frac{dS_t}{S_t}+(1-\alpha)\frac{dS^0_t}{S^0_t}$$ where $\alpha$ is proportion of the investment in the risky asset $S_t$ and $S^0_t$ is the risk-free asset. $S_t$ follows a geometric Brownian motion, $$\begin{aligned} \frac{dS_t}{S_t} &= \mu{dt} + \sigma{dW_t} \\ \frac{dS^0_t}{S^0_t} &= r dt \end{aligned}$$ Substituting the equation, we get $$\frac{dX_t}{X_t}=(\alpha\mu+(1-\alpha)r)dt+\alpha\sigma{dW_t}$$ Solving the following SDE yields $$X_t = X_0 \exp\left(\left(\alpha\mu + (1-\alpha) r -\frac{(\alpha\mu)^2}{2}\right)t+\sigma{W_t}\right)$$ So we have to maximize $\alpha\mu+(1-\alpha)r-\frac{(\alpha\mu)^2}{2}$. Differentiating the above expression $\mu-r-\alpha\sigma^2$, so $\alpha=\frac{\mu-r}{\sigma^2}$. I think there might be an error in my derivation. I looked at this part and felt that something was off. $$X_t = X_0 \exp\left(\left(\alpha\mu+(1-\alpha)r-\frac{(\alpha\mu)^2}{2}\right)t+\sigma{W_t}\right)$$ Did I derive the Kelly formula correctly?
Did I derive the Kelly criterion correctly?
CC BY-SA 4.0
null
2023-05-06T04:12:55.947
2023-05-06T19:24:16.893
2023-05-06T19:24:16.893
20795
67303
[ "stochastic-processes", "stochastic-calculus", "kelly-criterion" ]
75458
1
null
null
3
84
Very simplistically, ERISA rules require corporate pension plans to use market rates to discount their liabilities. If interest rates go up, the value of their pension liabilities goes down. Since asset values are also marked to market using market rates, their assets have similar treatment and therefore it is self hedging (at least for the fixed income allocation.) Public (or government) Pension Plans on the other hand do not use market interest rates to value their liabilities. They use a target pension return, which is rarely adjusted and often do not reflect current market interest rates. As such, the value of their liabilities remains constant (not taking into account growth due to salaries etc.) However, their assets are marked to market in that they are carried at the current market value. As such, this is not self hedging. Wouldn't it make more sense for Public Pension Plans to have some type of tail risk hedge, or option like strategy, given the asymmetric impact of interest rates on their pension funded status?
Tail Risk Hedging for Public Pension Plan
CC BY-SA 4.0
null
2023-05-06T04:39:05.247
2023-05-06T04:39:05.247
null
null
31014
[ "options", "fixed-income", "portfolio-optimization", "asset-allocation" ]
75459
2
null
75457
2
null
Might be a typo but you dropped the $\alpha$ on the noise term after solving the SDE: in $\exp(...)$ you should have $\alpha \sigma W_t$ instead of $\sigma W_t$. For deriving the Kelly criterion, it won't matter since we will take the mean and this term will vanish (see below). But in simulations this is important to get right, since your investment fraction $\alpha$ will scale the volatility $\sigma$ you're getting from the asset $S$. Let $g(\alpha) = r+(\mu-r)\alpha-\frac12 \sigma^2 \alpha^2$. This is just your drift expression, rearranged (note that the quadratic term should involve $\alpha^2\sigma^2$, not $(\alpha\mu)^2$; your later derivative suggests that was a typo). Your solution $X_t$ can now be written (with the above note in mind) as $$X_t = X_0 \exp(g(\alpha)t+\sigma \alpha W_t).$$ The Kelly criterion says to maximize the expected log utility: $$f(\alpha,t,x) = \mathbb{E}(\log X_t|X_0=x).$$ Taking logarithms gives $$\log X_t = \log X_0 +g(\alpha) t +\sigma \alpha W_t.$$ Taking the expectation conditional on $X_0=x$ gives $$f(\alpha,t,x)=\log x +g(\alpha) t+0.$$ Differentiating this with respect to $\alpha$ gives $$\frac{\partial f}{\partial \alpha} = g'(\alpha)t,$$ which is equal to zero if and only if $g'(\alpha)=0$ (we assume $t>0$). Computing $g'(\alpha)$ from the definition and setting it equal to zero means we have to solve $$g'(\alpha) = \mu-r-\sigma^2 \alpha=0,$$ which gives the solution $\alpha^* = (\mu-r)/\sigma^2$ as desired. Your derivation is essentially the same but omits some of the reasoning that would justify the steps (1. taking logs allows us to focus on the argument to $\exp$, 2. taking means makes the noise term vanish, so we just have to maximize the drift part).
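A quick numerical check of the result above, using the exact solution for $\log X_t$ with the $\alpha$ restored on the noise term; the parameter values are arbitrary.

```
import numpy as np

mu, r, sigma, T, n_paths = 0.08, 0.02, 0.20, 10.0, 200_000
rng = np.random.default_rng(0)
W_T = rng.normal(0.0, np.sqrt(T), n_paths)   # common noise reused for every alpha

def mean_log_growth(alpha):
    # log X_T = (r + (mu - r)*alpha - 0.5*sigma^2*alpha^2) * T + alpha * sigma * W_T
    log_xt = (r + (mu - r) * alpha - 0.5 * sigma**2 * alpha**2) * T + alpha * sigma * W_T
    return log_xt.mean() / T

alphas = np.linspace(0.0, 3.0, 301)
best = max(alphas, key=mean_log_growth)
print("simulated optimum:", round(float(best), 2), " theory:", (mu - r) / sigma**2)
```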
null
CC BY-SA 4.0
null
2023-05-06T05:14:14.203
2023-05-06T05:14:14.203
null
null
34134
null
75460
1
75462
null
1
184
It might be a very simple question but for some reason I'm a bit confused. Let's say we enter a long SOFR vs fixed interest rate swap at par, say a 5 year swap with annual coupons (the RFR is daily compounded and paid at each annual reset). The swap rate will be the average of forward rates weighted by the DFs (5 periods, so 5 sets of DF*forward rate). The zero coupon curve (and swap curve) is upward sloping. At initiation the NPV of the swap is zero. If we assume the curve stays where it is, then we would lose in MtM due to the rolldown, as in 1 year, for example, our now 4y swap would be MtM'd at a lower swap rate than our fixed coupon rate. Now if we assume the forward rates are actually realised, the final NPV of the swap should end up being zero by definition. But what does 'forward rates realised' actually mean? Does that mean the swap rate stays constant as we go through time? Spot 5y swap rate = 4y swap rate in 1y = 3y swap rate in 2y, etc.? If that's the case, for example in 1 year, our now 4y swap rate would still be the same as our fixed coupon rate -> no MtM impact, but if we look at the past cash flows and the first reset, we most likely paid more than we received. So the NPV (past cash flows + future MtM) should be negative at this time. How would it go up to 0? It seems to me the swap rate of the shorter time-to-expiry swaps should go progressively up to compensate.
MtM of interest rate swap if forward rates are realised
CC BY-SA 4.0
null
2023-05-06T12:58:06.023
2023-05-23T18:01:28.233
2023-05-06T13:06:45.893
57427
57427
[ "fixed-income", "interest-rate-swap" ]
75461
2
null
75445
4
null
I am not sure why you want to use detrending as part of a backtest. The only book I know that advocates such an approach is D. Aronson's Evidence Based Technical Analysis, Wiley, 2006. The valid point he makes is that if you test a stock market timing strategy that goes into and out of the market at random times, it will make money over a long period simply because the stock market rises over the long run. But that does not make it an attractive strategy. He advocates fitting a trend line to the S&P 500 prices over the backtest period (by connecting the starting and ending prices) and simulating buys and sells at an adjusted price equal to the actual price minus the trend line. For signal generation you would use the real prices. If you buy and sell at random using these adjusted prices you will make a P&L of zero, correctly showing the strategy is worthless. Some people call these prices the "drift-adjusted prices" (a term I like more than 'detrended', which has many meanings). The approach I prefer is to backtest 2 strategies using the same software and (unadjusted) data: the strategy you are interested in, and a Buy and Hold strategy that is fully invested in the S&P 500 at all times. Then you compare the stats for the 2 strategies and try to see if your strategy has an advantage over BH either in terms of excess returns or lower volatility, lower drawdown, etc. I think it is easier and cleaner to compare two realistic strategies rather than looking at a strategy that trades at made-up prices. But that is just my opinion.
null
CC BY-SA 4.0
null
2023-05-06T13:26:49.643
2023-05-06T13:26:49.643
null
null
16148
null
75462
2
null
75460
2
null
Forward rates realized means that if today the 1y forward 4y swap rate is $X$, then in one year the 4y spot starting swap rate will be $X$. In your example, let's say at inception the 5y spot swap rate is $Y$ and the 1y fwd 4y is $X$. Let's also set $Z$ to be the spot 4y swap rate today (so $Y>Z$ for an upward sloping curve). Now you enter a pay fixed 5y swap today at $Y$ (so 0 MtM). For a non-flat curve $X \neq Y$ (in fact for an upward sloping curve $X>Y>Z$). So, if forward rates are realized, after one year you will be paying $Y$ on a 4y spot swap while the market is $X$, so your MtM on the residual swap will be $X-Y>0$.

It's important to distinguish between "nothing happens" and "forwards are realized": the former means the curve stays exactly the same as it is today (so in a year the 4y spot swap rate is still $Z$ and your MtM is $Z-Y<0$). The latter means the curve changes to exactly what was implied by today's rates.

If we let $U$ (for unknown) denote what the 4y swap rate ultimately ends up being, then $Y-X$ and $U-Y$ are the carry and roll, respectively. In this sense, carry is a known quantity and roll is the variable (more on this below). Crucially, the forward value of the residual swap over a given horizon = -carry over that horizon.

The essential point is this: if you enter a pay fixed 5y swap today, hold the trade for a year, and then unwind it at market, how much money do you expect to make? The answer is 0, because the expected value of $U$ is $E(U)=X$: when you enter the swap, you lock in negative carry of $Y-X<0$; when you unwind it at the prevailing 4y rate of $U$ you make $U-Y$, which in expectation is $X-Y>0$.

On the other hand, if in a year's time you unwound the swap and the 4y rate was unchanged, i.e. $U=Z$, then you pocket/lose $Z-X$, i.e. the total carry+roll generated by your trade (MtM of your swap, aka rolldown, of $Z-Y$ plus locked-in carry of $Y-X$). Note that this is the same as paying 1y4y at $X$ and closing it out at $Z$ in a year with just a rolldown. This is the point Attack68 makes below (i.e. carry is a superfluous term in this context).

Finally, receiver carry/roll trades generate a profit if "nothing happens" (i.e. $U=Z$). But this is a subtle point because something is happening, namely the fwds are not being realized! So the market is "doing something unexpected" over that year. Any outcome other than the forwards being realized is equivalent in the sense that it indicates presence of volatility. That's a definition of volatility: deviation from expected value. So people looking to profit from putting on receiver carry/roll trades are effectively going long vol.
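A small numerical illustration of the identities above. The curve levels are invented, and P&L is quoted in rate points, ignoring the annuity, as in the text.

```
zero = {1: 0.030, 2: 0.033, 3: 0.035, 4: 0.037, 5: 0.038}   # annually compounded zero rates
D = {t: (1 + z) ** -t for t, z in zero.items()}
D[0] = 1.0

def par_rate(start, end):
    # par rate of an annual-pay swap running from year `start` to year `end`
    annuity = sum(D[t] for t in range(start + 1, end + 1))
    return (D[start] - D[end]) / annuity

Y = par_rate(0, 5)   # 5y spot
Z = par_rate(0, 4)   # 4y spot
X = par_rate(1, 5)   # 1y forward 4y
print(f"Z={Z:.4%}  Y={Y:.4%}  X={X:.4%}")                    # upward curve => X > Y > Z

print("locked-in carry Y - X:", f"{Y - X:+.4%}")
print("unchanged curve (U = Z): roll", f"{Z - Y:+.4%}", "=> carry + roll =", f"{Z - X:+.4%}")
print("forwards realised (U = X): roll", f"{X - Y:+.4%}", "=> carry + roll = +0.0000%")
```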
null
CC BY-SA 4.0
null
2023-05-06T14:28:27.027
2023-05-23T18:01:28.233
2023-05-23T18:01:28.233
35980
35980
null
75463
1
null
null
1
129
I use an ARMA-GARCH model for univariate distributions and a copula model for the multivariate distribution. For the value at risk (VaR) estimation, I do a Monte Carlo simulation. I'm looking for some general reasons why VaR estimations over- or underestimate the VaR in the sense of [this answer](https://quant.stackexchange.com/a/27877/67311). And what would be some source to look that up?
VaR backtesting. Reasons for over- and underestimation of value at risk estimates?
CC BY-SA 4.0
null
2023-05-06T16:24:16.523
2023-05-26T17:56:06.867
2023-05-09T11:26:21.910
19645
67311
[ "risk-management", "value-at-risk", "backtesting", "reference-request" ]
75464
2
null
52963
0
null
You have a good idea... I like to take \$1 and divide it by the current stock price to find what a one dollar increase in the underlying would represent percent-wise. Save that as ELEMENT 1. Then take the delta of an option on that stock and divide it by the cost of the option to find the percent-wise increase that would occur in the value of that option I "bought" if a \$1 increase occurred in that same stock. Save that as ELEMENT 2. Then find out what ELEMENT 1 / ELEMENT 2 is; it might be 1.5%/15%, or 7%/65%, or 15%/135%. Then, scaling the first percentage down to 1%, find the corresponding proportional percentage, i.e. the percentage increase in the value of the option per 1% increase in the stock price. I usually find it is about a 10% option value increase for every 1% increase in the value of the stock (for call options), but it can vary widely, as you might imagine.
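What the ratio described above amounts to is the option's elasticity (sometimes called lambda): delta times stock price divided by option price. A rough sketch with Black-Scholes values; the inputs are invented.

```
import numpy as np
from scipy.stats import norm

S, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 0.5
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
delta = norm.cdf(d1)

elasticity = delta * S / call   # % change in option value per 1% change in the stock
print(f"call={call:.2f}  delta={delta:.3f}  option move per 1% stock move = {elasticity:.1f}%")
```

With these inputs the elasticity comes out around 8, in the same ballpark as the "about 10%" figure above.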
null
CC BY-SA 4.0
null
2023-05-07T01:46:54.920
2023-05-07T01:53:36.130
2023-05-07T01:53:36.130
67315
67315
null
75465
1
null
null
1
97
Why do we need an ex-dividend date? What is the problem with the ex-dividend date being the same as the payment date? Why are they separate? What problem does having a separate ex-dividend date solve? For example, at the moment - a company announces a \$1 dividend on 12 May with ex-div date of 12 June and a payment date of 12 July. The stock price goes down by \$1 on the ex-dividend date of 12 June. The money goes out of the company to investors on 12 July. Between 12 June and 12 July, the stock is cheaper by $1 because it has been discounted (ex-div) but the money is still in the company's bank accounts. What problem does this extra (ex-div) date solve? I can only see it introducing a risk or a mismatch because a company is entitled to cancel the dividend distribution past the ex-dividend date and not pay it out. I agree it is rare but it happens and is legal and possible. The only date where we are 100% sure that a dividend is in fact paid out or not (and the stock price should go down by \$1) is on the payment date when the money goes from of the company's account into the shareholders accounts.
Why do we need an ex-dividend date?
CC BY-SA 4.0
null
2023-05-07T12:40:13.767
2023-05-07T21:00:32.050
null
null
17776
[ "equities", "valuation", "dividends", "accounting" ]
75466
2
null
75463
2
null
I think there could be a few theoretical reasons for it.

- VaR is distribution dependent. Even if bootstrapping, it is necessary that the underlying distribution satisfy second order conditions for convergence.
- It also depends on the type of VaR being used. CVaR and VaR, for instance, capture two different things. CVaR is known to outperform VaR.
null
CC BY-SA 4.0
null
2023-05-07T12:49:24.290
2023-05-08T06:17:48.700
2023-05-08T06:17:48.700
19645
52221
null
75467
2
null
75435
0
null
It is an interesting observation and a bit of a stretch. However, note that the KL loss function is merely a divergence function used in many applications as modelers need it. In the case of the MEMM, the probability measure found is the closest one to the historical measure that is risk neutral. The KLD has nothing to do with risk neutrality intrinsically.
null
CC BY-SA 4.0
null
2023-05-07T13:07:36.837
2023-05-07T13:07:36.837
null
null
52221
null
75468
2
null
73963
0
null
There is no mathematical jargon for such terms. However, in particular applications a good understanding of what those terms do may warrant application-specific names, such as the one mentioned by the previous poster in the bond market.

- First order term = linear approximation of whatever function or price you have, the rate of change being the constant coefficient.
- Second order term = quadratic correction, its constant coefficient being the rate of change of the previous constant coefficient.
- 10th order term = 10th order polynomial approximation, its constant coefficient being the rate of change of the previous (9th order) coefficient.
null
CC BY-SA 4.0
null
2023-05-07T13:21:24.780
2023-05-07T13:21:24.780
null
null
52221
null
75471
2
null
53457
1
null
This question about reversibility of time has attracted a lot of attention from econophysicists. The laws of physics guarantee no time reversion in real life (because of the entropic principle): is it the same on financial markets? Somehow, the returns of stocks should reflect the growth of the economy and the risk of the projects the companies are betting on. All that being grounded in the physical world, one could expect no reversion of time on financial markets. One piece of empirical evidence is that recent trends predict bursts of volatility in only one direction: "future trends are not following volatile periods" with the same intensity. To test that:

- build a very simple short term trend indicator ${\cal T}(t,t-\delta t)$
- estimate the short term volatility $\sigma(t,t-\delta t)$
- add any indicators/analytics you like during the same interval $[t,t-\delta t]$,

so that you can build a feature space $X_t$ by collecting all these features in a vector. Then for any stopping time $\tau$ you need to store the volatility from $\tau+1$ to $\tau+\delta t$ in a $Y_\tau$ variable. Use your favourite model (linear regression, random forest, etc.) to predict $Y$ as a function of $X$. Keep the residuals and the $R^2$ of this regression. Do the same on the time-reversed time series: you will see that the $R^2$ should be worse.
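A rough sketch of the procedure described above. The feature set is deliberately crude, and the return series is a synthetic stand-in for real data (on i.i.d. noise both $R^2$ values will be near zero; the asymmetry only shows up on real market data).

```
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
returns = rng.standard_t(5, 2500) * 0.01          # stand-in for your daily return series

def build_xy(r, lookback=20, horizon=20):
    X, y = [], []
    for t in range(lookback, len(r) - horizon):
        window = r[t - lookback:t]
        X.append([window.sum(), window.std()])    # short-term trend and volatility
        y.append(r[t:t + horizon].std())          # future realised volatility
    return np.array(X), np.array(y)

def vol_forecast_r2(r):
    X, y = build_xy(r)
    split = len(X) // 2
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])
    return r2_score(y[split:], model.predict(X[split:]))

print("forward  R2:", vol_forecast_r2(returns))
print("reversed R2:", vol_forecast_r2(-returns[::-1]))   # returns of the time-reversed price path
```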
null
CC BY-SA 4.0
null
2023-05-07T16:17:07.103
2023-05-07T16:17:07.103
null
null
2299
null
75472
2
null
75465
4
null
I think you are really concerned about the record date. The ex-dividend date itself is set by the exchange. See for example [Nasdaq](https://www.nasdaq.com/articles/ex-dividend-date-vs.-record-date:-whats-the-difference).

> The firm issuing the stock manages the declaration date, record date and payout date, but the exchange sets the ex-dividend date... ...the New York Stock Exchange (NYSE) would set the ex-dividend date for March 13 to allow time for trade settlement. The NYSE sets most ex-dividend dates and other exchanges follow in lockstep.

This is similar to other settlement dates that exist with most transactions. Generally speaking I think it's extremely unlikely a company will cancel a dividend. For final dividends, this is anyhow pretty much impossible. See for example [ffslaw](https://ffslaw.com/articles/have-directors-improperly-refused-to-declare-a-dividend/#:%7E:text=Once%20the%20board%20of%20directors,consent%20of%20each%20such%20shareholder.):

> Once the board of directors has lawfully declared a dividend for each shareholder entitled to receive it, the board may not revoke it or withhold dividend distribution without the consent of each such shareholder.

Furthermore,

> Setting the payment date rests within the sound discretion of the board of directors. Normally, it is set within 30-60 days following the “record date,” to allow a reasonable period for administrative preparation to make dividend distributions.

Therefore, the reason for the time gap is to allow some admin time. It's extremely unlikely any company will cancel already declared dividends unless they get into severe trouble or regulators push them towards it, as was the case with HSBC for example, see [info.gov.hk](https://www.info.gov.hk/gia/general/202005/13/P2020051200565.htm).
null
CC BY-SA 4.0
null
2023-05-07T20:45:08.247
2023-05-07T20:45:08.247
null
null
54838
null
75473
2
null
75465
-1
null
The ex-dividend date is the date on or after which a buyer of a stock is not entitled to receive the next dividend payment. The purpose of the ex-dividend date is to ensure that the buyers and sellers of a stock share the responsibility of paying taxes on any dividends paid by the company. If the ex-dividend date were the same as the payment date, buyers would be entitled to receive the dividend even if they held the stock for only a short period of time before selling it. This could lead to an unfair situation where some investors would receive the benefit of the dividend without bearing the tax burden associated with it, while others would have to pay taxes without receiving the benefit. By setting an ex-dividend date, the company ensures that only those shareholders who owned the stock before that date are entitled to the dividend. This allows for a fair distribution of the tax burden and prevents investors from engaging in short-term trades to take advantage of dividend payouts. Regarding the risk of the company cancelling the dividend distribution after the ex-dividend date, it is true that this is possible. However, this is generally a rare occurrence and is typically only done in exceptional circumstances, such as a significant decline in the company's financial position. In general, companies try to avoid cancelling dividends once they have been announced, as it can damage their reputation with investors.
null
CC BY-SA 4.0
null
2023-05-07T21:00:32.050
2023-05-07T21:00:32.050
null
null
67326
null
75474
2
null
39241
0
null
You can set the settlementDays parameter explicitly by passing it as an argument to the SwapRateHelper constructor. To do so, you can modify your code like this:

```
s_helpers = [
    SwapRateHelper(rate/100.0, tenor, l_calendar, Semiannual, l_pmt_conv,
                   Thirty360(), USDLibor(Period(3, Months)), settlementDays=0)
    for tenor, rate in [(Period(1,Years), 2.395), (Period(2,Years), 2.575),
                        (Period(3,Years), 2.651), (Period(5,Years), 2.704),
                        (Period(7,Years), 2.734), (Period(10,Years), 2.779),
                        (Period(30,Years), 2.822)]
]
```
null
CC BY-SA 4.0
null
2023-05-07T21:05:16.627
2023-05-07T21:05:16.627
null
null
67326
null
75475
2
null
8099
0
null
Yes, you can access a list of CIK (Central Index Key) codes for all registered companies with the SEC through the EDGAR (Electronic Data Gathering, Analysis, and Retrieval) system. Here's how you can do it:

- Go to the SEC's EDGAR company filings website: [https://www.sec.gov/edgar/searchedgar/companysearch.html](https://www.sec.gov/edgar/searchedgar/companysearch.html)
- Scroll down to the "Company Name" section and click on the "CIK Lookup" link.
- You will be redirected to the "CIK Lookup" page where you can download a complete list of CIK codes in a zip file format. Click on the "CIK Lookup Data" link to download the file.
- Extract the files from the zip file and open the "cik.txt" file in a text editor. This file contains a list of all CIK codes for all registered companies with the SEC.

Note that the list of CIK codes may be large and may take some time to download and open. Also, note that the list may not include newly registered companies or companies that have recently deregistered or gone bankrupt.
null
CC BY-SA 4.0
null
2023-05-07T21:07:50.167
2023-05-07T21:07:50.167
null
null
67326
null
75477
1
null
null
0
40
I'm reading Option Pricing: A Simplified Approach and have a question. Assume the binomial tree model for the stock. So:

- $n$ discrete time periods
- $S$ is the stock
- $C$ is the call
- $K$ is the strike
- $u$ is the upward move
- $d$ is the downward move
- $r$ is the total return $1+R$ where $R$ is the interest rate
- no arb: $d < r < u$
- $q$ is the probability of an upward move
- $p = (r-d) / (u-d)$ is the risk-neutral probability of an upward move

Fine so far. But then Cox defines $p^{\prime}$ as $(u/r) p$, and I don't understand what this parameter represents. Concretely, the binomial options pricing formula for a call $C$ is $$ C = S \Phi(a; n, p^{\prime}) - K r^{-n} \Phi(a; n, p) $$ where $a$ is the minimum number of upward moves for the call to be in-the-money and $\Phi$ is the complementary binomial distribution function, i.e. 1 minus the CDF. I can't make sense of $p^{\prime}$ though. Why does this adjustment factor fall out of the model? What does it represent?
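One way to see where the factor comes from (a sketch of my own, not quoted from the paper): pricing the stock itself under the risk-neutral probabilities gives $$S = r^{-n}\,E^{p}[S_n] = S\sum_{j=0}^{n}\binom{n}{j}p^j(1-p)^{n-j}\,\frac{u^j d^{\,n-j}}{r^n},$$ so the weights $$\binom{n}{j}\left(\frac{pu}{r}\right)^{j}\left(\frac{(1-p)d}{r}\right)^{n-j} = \binom{n}{j}(p')^j(1-p')^{n-j}$$ sum to one (because $pu+(1-p)d=r$) and therefore define a legitimate probability, with $p' = pu/r$. It is the risk-neutral probability re-weighted by the realized stock growth $u^jd^{n-j}/r^n$, i.e. the probability associated with taking the stock as numéraire, and the first term of the formula is then $S\,\Phi(a;n,p')$, the stock price times the probability of finishing in-the-money under that measure.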
Understanding the adjustment $(u/r) p$ in the binomial options pricing formula
CC BY-SA 4.0
null
2023-05-07T23:33:01.527
2023-05-07T23:33:01.527
null
null
56943
[ "options" ]
75478
1
null
null
3
44
Suppose I have two Geometric Brownian motions and a bank account: $$dB_t=rB_tdt$$ $$ dS=S(\alpha dt + \sigma dW_t) $$ $$ dY = Y(\beta dt + \delta dV_t) $$ where $dW_t$ and $dV_t$ are independent Wiener processes. Now suppose that I have a martingale measure $Q$ such that $S_t/B_t$ is a martingale. Then we have: $$dS = S(rdt+\sigma dW_t^Q)$$ Is it possible to write the dynamics of $Y_t$ under $Q$? Suppose also I am interested in pricing $Z_t=S_tY_t$. Would I still use the measure $Q$ to price $Z$, or am I using a separate measure $\tilde{Q}$ such that $Z_t/B_t$ is a martingale? Thanks!
Dynamics of independent Geometric Brownian Motions under risk-neutral measure Q
CC BY-SA 4.0
null
2023-05-08T03:48:39.383
2023-05-08T03:49:07.060
2023-05-08T03:49:07.060
67328
67328
[ "option-pricing", "risk-neutral-measure", "geometric-brownian" ]
75479
1
null
null
3
110
Should I back-test in a single (original) price series and bootstrap the strategy returns to get statistics of interest? Or should I create bootstrapped price series using bootstrapped returns from the original price series and run the strategy on those? The latter one requires significantly more computational power.
Proper way to backtest strategy using bootstrap method
CC BY-SA 4.0
null
2023-05-08T06:11:39.673
2023-05-09T15:15:23.200
null
null
41582
[ "quant-trading-strategies", "finance-mathematics", "backtesting", "mathematics", "bootstrap" ]
75482
1
null
null
3
107
I need to build a Liquidity Risk report at my intern job. There, I consider an MDTV90 (Median Daily Traded Value over 90 days, a measure of liquidity) for each asset we trade to find how many days we would spend selling it (Days to Sell). It is quite easy for equities because historical data is widely available. However, when talking about options, we have very sparse volume data (it follows something like a power law: most days have small trade volume, and some days have big trade volume), so the resulting MDTV is not a great value to represent daily liquidity. I would therefore like to know how I can compute an alternative MDTV-style liquidity measure for options, or fetch uniform volume data for specific options?
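A synthetic illustration of why the 90-day median collapses on sparse option volume, together with a mean-based alternative; none of this is a recommendation for a specific fix, and the data below is invented.

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.bdate_range("2023-01-02", periods=250)

# sparse option volume: most days ~zero, occasional large prints
notional = np.where(rng.random(len(days)) < 0.3, rng.pareto(1.5, len(days)) * 10_000, 0.0)
daily = pd.Series(notional, index=days, name="traded_notional")

mdtv90 = daily.rolling(90).median().iloc[-1]   # zero-heavy series often gives MDTV90 = 0
adtv90 = daily.rolling(90).mean().iloc[-1]     # the mean survives sparsity but is outlier-driven
position = 250_000.0
print("MDTV90:", mdtv90, " days to sell:", np.inf if mdtv90 == 0 else position / mdtv90)
print("ADTV90:", round(adtv90), " days to sell:", round(position / adtv90, 1))
```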
Methods to estimate Options volume
CC BY-SA 4.0
null
2023-05-08T17:43:36.600
2023-05-10T14:16:39.653
2023-05-10T12:52:26.930
67342
67342
[ "options", "risk-management", "volume" ]
75483
2
null
75479
1
null
A straight bootstrap of the returns of the strategy would give inconclusive evidence about the ability of the strategy to generate added value in terms of some "abnormal" returns. Bootstrapping the original time series would undermine the ability of the strategy to generate returns even if the strategy is reasonable. You could instead use a solution in the style of Cowles, described above. For example, something like this (a rough sketch follows below):

1. Model the distribution of the number of bars between two signals (buy and sell) of the actual strategy.
2. Generate N buy/sell pairs from the distribution in (1) and apply them at random points in the time series.
3. Calculate the equity curve and all statistics like profit factor, average drawdown, Ulcer and Sharpe ratio, etc.
4. Repeat (2) and (3) many times, say B = 10,000+ times.
5. Compare the actual metrics to the bootstrapped ones.
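A rough sketch of steps 1-4 above; the price path and the observed holding periods are synthetic stand-ins for your own data, and the actual strategy's figure is a placeholder.

```
import numpy as np

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0.0002, 0.01, 2500))   # stand-in price series
actual_gaps = rng.integers(5, 60, size=80)                      # stand-in bars-held per trade
n_trades = len(actual_gaps)

def random_strategy_pnl():
    holds = rng.choice(actual_gaps, size=n_trades, replace=True)         # steps 1-2
    entries = rng.integers(0, len(prices) - actual_gaps.max() - 1, size=n_trades)
    return np.sum(prices[entries + holds] / prices[entries] - 1)         # step 3: total return

boot = np.array([random_strategy_pnl() for _ in range(10_000)])          # step 4
actual_total_return = 0.35                                               # your strategy's figure
print("fraction of random runs beating it:", float((boot >= actual_total_return).mean()))
```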
null
CC BY-SA 4.0
null
2023-05-08T19:28:58.730
2023-05-09T15:15:23.200
2023-05-09T15:15:23.200
6535
6535
null
75484
1
null
null
8
432
I can't find the below statement anywhere (rearrangement of Black-Scholes formula) : $C(0, S) = e^{-rT}N_2[F-K] + [N_1-N_2]S$ $F$ being the forward, it reads as a straightforward decomposition to intrinsic value (1st term) and extrinsic/time value (2nd term). This may answer the famous question what is the difference between $Nd_1$ and $Nd_2$ (mathematical difference and the difference in meaning too): The difference is the time value of the option. Edit: Just wanna add that for small log-normal volatility $\sigma\sqrt{T} < 1 $ : $$N_1 - N_2 = N(d_1) -N(d_1 - \sigma\sqrt{T}) \approx \sigma\sqrt{T}n_1$$ Hence, as $\mathcal{Vega} = S\sqrt{T}n_1$ the "speculative" time value is $$ [N_1 - N_2]S = \sigma\mathcal{Vega} = \sigma\sqrt{T}n_1S $$ And: $$N_2 \approx N_1 - \frac{\sigma\mathcal{Vega}}{S} = \Delta - \frac{\sigma\mathcal{Vega}}{S} $$ Thus for small $\sigma\sqrt{T}$ : $$C = \left[\Delta - \frac{\sigma\mathcal{Vega}}{S} \right] [F - K] + \sigma\mathcal{Vega}$$ The "intrinsic value" of the 1st term is not negative for OTM as mentioned in the comment (bc delta $\approx$ 0 and vega > 0). ATM the 1st term (intrinsic value) is zero so the price is linear in volatility and is purely speculative (think of vega as a proxy for the bid-ask spread). Also, the ATM vega is maximal $\mathcal{Vega}_{max} = 0.4S\sqrt{T}$ which makes the ATM price equals to the maximal time value of the option, both equal to $0.4S\sigma\sqrt{T}$. The difference $N_1 - N_2$, the time value and the vega (vega cash) normalized by S are three sides of the same coin.
Option time value is Nd1-Nd2
CC BY-SA 4.0
null
2023-05-09T09:02:03.510
2023-05-11T19:27:02.683
2023-05-11T19:27:02.683
60070
60070
[ "black-scholes", "european-options" ]
75485
2
null
75484
10
null
That's nice. Starting from $$C = e^{-r T}N_2 (F-K) + (N_1 - N_2) S$$ we can substitute $F= e^{r T}S$ (no dividend case) so we get $$C = e^{-r T}N_2(e^{r T} S-K) + (N_1 - N_2) S = N_1 S -e^{-r T}N_2 K, $$ which is just the Black-Scholes (1973) formula. The first term $e^{-r T}N_2 (F-K)$ is a new definition of "intrinsic value", different from the traditional one; you could call it the "forward intrinsic value" or something like that. It is the present value of the forward minus the strike, times the probability $N_2$ (roughly speaking the probability of exercise). It could be negative for an OTM call (weird). Then the second one, $(N_1 - N_2) S$, is the corresponding form of time value which again deserves a new name (the "speculative value"?).
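For anyone who wants to check the algebra numerically, a quick sketch (the inputs are arbitrary):

```
import numpy as np
from scipy.stats import norm

S, K, r, sigma, T = 100.0, 95.0, 0.03, 0.25, 0.75
F = S * np.exp(r * T)
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
N1, N2 = norm.cdf(d1), norm.cdf(d2)

bs_call = S * N1 - K * np.exp(-r * T) * N2
decomposed = np.exp(-r * T) * N2 * (F - K) + (N1 - N2) * S
print(bs_call, decomposed)   # identical
print("forward intrinsic:", np.exp(-r * T) * N2 * (F - K), " speculative:", (N1 - N2) * S)
```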
null
CC BY-SA 4.0
null
2023-05-09T11:44:09.000
2023-05-09T12:01:31.200
2023-05-09T12:01:31.200
16148
16148
null
75486
1
null
null
5
227
There is a folklore white noise hypothesis related to (and equivalent to some forms of) the efficient market hypothesis in finance -see references below. But are there some asset pairs whose return time series (or perhaps some "natural" transforms of those time series) are approximately noises of another color than white ? -I ask as a nonspecialist, obviously. Thank you. Bonus question: Does anyone know how to play/hear a (financial) time series recorded as a pandas series, dataframe, python list, numpy array, csv/txt file,... ? [https://www.jstor.org/stable/2326311](https://www.jstor.org/stable/2326311) [https://www.lasu.edu.ng/publications/management_sciences/james_kehinde_ja_10.pdf](https://www.lasu.edu.ng/publications/management_sciences/james_kehinde_ja_10.pdf) [http://www2.kobe-u.ac.jp/~motegi/WEB_max_corr_empirics_EJ_revise1_v12.pdf](http://www2.kobe-u.ac.jp/%7Emotegi/WEB_max_corr_empirics_EJ_revise1_v12.pdf) [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8450754/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8450754/) [https://journals.sagepub.com/doi/pdf/10.1177/0256090919930203](https://journals.sagepub.com/doi/pdf/10.1177/0256090919930203) [http://www.ijhssnet.com/journals/Vol_2_No_22_Special_Issue_November_2012/23.pdf](http://www.ijhssnet.com/journals/Vol_2_No_22_Special_Issue_November_2012/23.pdf) [https://en.wikipedia.org/wiki/Colors_of_noise](https://en.wikipedia.org/wiki/Colors_of_noise) -On colors of noise
What color financial time series are there?
CC BY-SA 4.0
null
2023-05-09T15:21:31.133
2023-05-09T21:24:34.223
null
null
66856
[ "time-series", "statistical-finance", "market-efficiency" ]
75487
1
null
null
2
45
Dear Quant StackExchange, I seek some intuition for how my portfolio behaves given constraints. In a universe of say 5 assets, I have a "target portfolio" with weights that are found from risk budgeting, $w_T$. But I also have a benchmark ($w_{BM}$) that I care about minimizing my tracking error volatility towards. That benchmark is invested in some assets (say assets 4 and 5) that I cannot invest in, which gives rise to a further constraint $w_i=0$ for $i=4,5$. So I'm interested in finding a portfolio of weights ($\hat{w}$) that is as close to the target portfolio $w_T$ as possible, subject to the constraint that tracking error volatility towards the benchmark is below some threshold $\tau$. The other constraint is that the weights in assets 4 and 5 in $\hat{w}$ are zero. I have set up the Lagrangian: $$L = (w-w_T)'\Sigma(w-w_T) + \lambda_1Aw + \lambda_2[(w-w_{BM})'\Sigma(w-w_{BM}) +\tau] $$ where $\Sigma$ is the covariance matrix and $A$ is a $5\times5$ matrix of zeros except for the 4th and 5th diagonal entries being 1. My intuition says that the resulting portfolio should lie somewhere between the target portfolio, $w_T$, and the "minimum tracking error portfolio" (which I get to be): $$w_{mte} = \underset{\text{s.t. }A'w=0}{\text{arg min }(w-w_{BM})'\Sigma(w-w_{BM})} = w_{BM}-\frac{\lambda_1}{2}\Sigma^{-1}A$$ But when solving it, I'm not able to show that the optimal portfolio is something like a complex convex combination of the two (think: $\hat{w} = \Gamma w_T + (1-\Gamma)w_{mte}$ where $\Gamma = f(\Sigma^{-1}, \lambda_1, \lambda_2)$)? I guess $\Gamma$ should simplify to the identity matrix when the second constraint is not binding, i.e. $\lambda_2=0$, thereby yielding the target portfolio. Any help is much appreciated! Am I on the wrong track here, or is somebody able to derive it?
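If the closed form gets messy, the problem is a small convex QCQP that can be checked numerically, e.g. with cvxpy. Everything below (covariance, weights, the tracking-error cap, and the full-investment constraint) is invented for illustration.

```
import cvxpy as cp
import numpy as np

n = 5
vols = np.array([0.18, 0.20, 0.22, 0.15, 0.10])
corr = 0.3 + 0.7 * np.eye(n)
Sigma = np.outer(vols, vols) * corr                   # toy covariance matrix

w_T = np.array([0.30, 0.30, 0.40, 0.00, 0.00])        # target (risk-budget) weights
w_BM = np.array([0.20, 0.20, 0.20, 0.20, 0.20])       # benchmark holds assets 4 and 5
tau = 0.07                                            # tracking-error volatility cap

w = cp.Variable(n)
prob = cp.Problem(
    cp.Minimize(cp.quad_form(w - w_T, Sigma)),
    [cp.quad_form(w - w_BM, Sigma) <= tau**2,         # TE volatility <= tau
     w[3] == 0, w[4] == 0,                            # cannot hold assets 4 and 5
     cp.sum(w) == 1])                                 # full investment (my extra assumption)
prob.solve()
print(np.round(w.value, 4))
```

Solving for a grid of $\tau$ values and watching how the solution moves from the minimum tracking error portfolio towards $w_T$ is a cheap way to test the convex-combination intuition.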
Optimal portfolio as combination of target and minimum tracking error portfolios?
CC BY-SA 4.0
null
2023-05-09T15:38:10.027
2023-05-09T15:38:10.027
null
null
42343
[ "portfolio-optimization", "risk-management", "optimization" ]
75488
1
null
null
2
91
It seems to me like historic backtesting is the best of bad options out there for me to test my systematic strategies - even ones that are more macro-level trend spotting. I can't test enough scenarios since history is constrained by types of events that have occurred. I want to be robust in my testing and test all possible scenarios that might happen. Wondering if anyone else feels the same way and whether there are any tools out there or whether I'd have to build my own (ex. Agent-Based Simulations).
Fatigue with Historic Backtesting - Alternatives?
CC BY-SA 4.0
null
2023-05-09T19:39:15.053
2023-05-10T15:43:54.657
null
null
67036
[ "backtesting", "simulations", "trading-systems", "multi-agent-simulations" ]
75490
2
null
75486
8
null
> Bonus question: Does anyone know how to play/hear a (financial) time series recorded as a pandas series, dataframe, python list, numpy array, csv/txt file,... ?

This is kind of fun and has practical applications to quantitative finance. My partners and I have actually been experimenting with this as the basis for a model for a short while now and have experienced very interesting results. At first, I found it most straightforward to map my time series to piano key frequencies. I specifically made a dictionary of piano key frequencies out of just one octave on a piano. The octave consisted of seven white and five black (sharp) keys. Each key was calibrated in relation to the others, just like a piano would be tuned. I "tuned" it to middle C in this way: $$\text{note frequency} = \text{base frequency} \times 2^{N/12}$$ where the base frequency is that of [middle C](https://en.wikipedia.org/wiki/C_(musical_note)) (261.63 Hz) and $N$ indexes the twelve notes from C to B (C, c, D, d, E, F, f, G, g, A, a, B). This is known as an [equal temperament system](https://en.wikipedia.org/wiki/Equal_temperament). In Python, we can create a dictionary of frequencies like this:

```
def piano_notes():
    '''
    Returns a dictionary containing the frequencies of piano notes
    '''
    base_freq = 261.63
    octave = ['C', 'c', 'D', 'd', 'E', 'F', 'f', 'G', 'g', 'A', 'a', 'B']
    note_freqs = {octave[i]: base_freq * 2**(i/12) for i in range(len(octave))}
    note_freqs[''] = 0.0  # pause / silent note
    return note_freqs
```

The output from `print(piano_notes())`:

```
{'C': 261.63, 'c': 277.18732937722245, 'D': 293.66974569918125, 'd': 311.1322574981619, 'E': 329.63314428399565, 'F': 349.2341510465061, 'f': 370.00069432367286, 'G': 392.0020805232462, 'g': 415.31173722644, 'A': 440.00745824565865, 'a': 466.1716632541139, 'B': 493.89167285382297, '': 0.0}
```

From here, you must decide how to transform every price or return in your time series into an integer from 0 - 11 and map them to their respective dictionary values. That step requires some creativity, and I'll leave that to you. Now that you have your time series mapped to piano note frequencies, to be able to listen to your time series, you need to convert your frequencies into something that can be played, i.e., you need to turn them into sound waves! A wave can be mathematically described as: $$y(t)=A \sin(2\pi f t)$$ where $A$=amplitude, $f$=frequency, and $t$=time. That said, we need to have a function that generates a wave array with respect to time, which is much easier than it sounds:

```
import numpy as np

sample_rate = 44100  # standard sample rate in digital audio (in Hertz, Hz)

def waves(freq, duration=0.5):
    '''
    Takes a frequency and a duration as inputs and returns a numpy array
    of the wave's values at all sample points in time
    '''
    amplitude = 4096  # peak amplitude of the wave
    t = np.linspace(0, duration, int(sample_rate * duration))
    wave = amplitude * np.sin(2 * np.pi * freq * t)
    return wave
```

After turning your notes into playable waves, you concatenate them, save them locally, and play them.

```
import numpy as np
from scipy.io.wavfile import write

def song_data(music_notes):
    '''
    Concatenate the waves of all the notes
    '''
    note_freqs = piano_notes()
    song = [waves(note_freqs[note]) for note in music_notes.split('-')]
    song = np.concatenate(song)
    return song
```

Here is an example of using the above functions to play "Mary Had A Little Lamb." The file will be saved in your working directory and can be played using a generic .wav player on just about any machine.

```
music_notes = 'E-D-C-D-E-E-E--D-D-D--E-E-E--E-D-C-D-E-E-E--E-D-D-E-D-C-'
data = song_data(music_notes)
write('mary-had-a-little-lamb.wav', sample_rate, data.astype(np.int16))
```

Practically speaking, the similarities between the math behind the music and other patterns in nature are extremely interesting. Our original idea has morphed into a full piano (88 keys) with seven octaves and all known chords being played. We have begun to incorporate other instruments recently as well. I'll leave it to you to determine whether or not the markets are, indeed, playing a song that you like--and can profit from!
null
CC BY-SA 4.0
null
2023-05-09T21:24:34.223
2023-05-09T21:24:34.223
null
null
26556
null
75491
1
null
null
2
29
I tried finding upper bounds for each component in terms of E_1 using the put call parity but couldn’t get the correct answer. [](https://i.stack.imgur.com/jpN3R.jpg)
Finding upper bound for portfolio made from European call / put options
CC BY-SA 4.0
null
2023-05-09T23:06:43.087
2023-05-09T23:06:43.087
null
null
67356
[ "options", "option-pricing", "no-arbitrage-theory", "put-call-parity" ]
75492
1
null
null
1
51
I'm reading Antoine Savine's fascinating book Modern Computational Finance: AAD and Parallel Simulations. However, I got a bit confused while reading and couldn't make sense of how this part works. To be more specific, in the following paragraphs:

> We developed, in the previous chapter, functionality to obtain the microbucket $\frac{\partial V_0}{\partial \sigma(S,t)}$ in constant time. We check-point this result into calibration to obtain $\frac{\partial V_0}{\partial \hat{\sigma}(K,t)}$, what Dupire calls a superbucket. We are missing one piece of functionality: our IVS $\hat{\sigma}(K,T)$ is defined in derived IVS classes, from a set of parameters, which nature depends on the concrete IVS. For instance, the Merton IVS is parameterized with a continuous volatility, jump intensity, and the mean and standard deviation of jumps. The desired derivatives are not to the parameters of the concrete IVS, but to a discrete set of implied Black and Scholes market-implied volatilities, irrespective of how these volatilities are produced or interpolated. To achieve this result, we are going to use a neat technique that professional financial system developers typically apply in this situation: we are going to define a risk surface: $$s(K,T)$$ such that if we denote $\hat{\sigma}(K,T)$ the implied volatilities given by the concrete IVS, our calculations will not use these original implied volatilities, but implied volatilities shifted by the risk surface: $$\Sigma(K,T) = \hat{\sigma}(K,T) + s(K,T)$$ Further, we interpolate the risk surface $s(K,T)$ from a discrete set of knots: $$s_{ij} = s(K_i, T_j)$$ that we call the risk view. All the knots are set to 0, so: $$\Sigma(K,T) = \hat{\sigma}(K,T)$$ so the results of all calculations remain evidently unchanged by shifting implied volatilities by zero, but in terms of risk, we get: $$\frac{\partial}{\partial \hat{\sigma}(K,T)} = \frac{\partial}{\partial s(K,T)}$$ The risk view does not affect the value, and its derivatives exactly correspond to derivatives to implied volatilities, irrespective of how these implied volatilities are computed. We compute sensitivities to implied volatilities as sensitivities to the risk view: $$\frac{\partial V_0}{\partial s_{ij}}$$

...

In my understanding, the risk surface added to the implied volatility should be zero, but I don't see how adding the risk surface translates the model-parameter risk into market risk. Can anyone provide any intuition behind this process?
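To make my current reading concrete, the chain rule I have in mind (writing $w_{ij}(K,T)$ for the interpolation weights of the risk view; this is my own notation, not the book's) is

$$\frac{\partial V_0}{\partial s_{ij}} = \iint \frac{\partial V_0}{\partial \Sigma(K,T)}\,\frac{\partial \Sigma(K,T)}{\partial s_{ij}}\,dK\,dT = \iint \frac{\partial V_0}{\partial \Sigma(K,T)}\,w_{ij}(K,T)\,dK\,dT$$

and, since all the knots are zero, $\Sigma(K,T) = \hat{\sigma}(K,T)$ at the point where the derivative is evaluated, so $\partial/\partial\Sigma$ coincides with $\partial/\partial\hat{\sigma}$. What I am still missing is the intuition for why bumping through the risk view (i.e., through the market quotes) rather than through the model parameters is what turns the model vega matrix into the market vega matrix.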
From model vega matrix to market vega matrix
CC BY-SA 4.0
null
2023-05-10T01:17:34.840
2023-05-10T01:17:34.840
null
null
62759
[ "implied-volatility", "local-volatility" ]
75493
1
null
null
2
50
[](https://i.stack.imgur.com/c1H1v.png) In this derivation of Black's formula for puts, we have that $\mathbb{E}[e^X 1_{e^X \leq K/S_0}]$ somehow equals $S_0 e^{\mu + 0.5 \sigma^2} N$ (as above in the formula). I tried breaking apart the formula into $$\mathbb{E}[e^X 1_{e^X \leq K/S_0}] = \mathbb{E}(e^X)\mathbb{E}(1_{e^X \leq K/S_0}) + \operatorname{Cov}(e^X,1_{e^X \leq K/S_0})$$ and ended up with $$S_0 e^{\mu + 0.5\sigma^2} N((\ln(K/S_0) - \mu)/\sigma) + \operatorname{Cov}(e^X, 1_{e^X \leq K/S_0})$$ by applying the MGF and standardizing the argument of the Normal CDF, and now I'm stuck on what to do next to simplify it into the form given in step 3. My question is how to calculate away the covariance term so that it matches the last line of the circled formula.
Black's formula derivation: expectation of a indicator times a random variable
CC BY-SA 4.0
null
2023-05-10T02:59:53.800
2023-05-10T03:39:39.410
2023-05-10T03:39:39.410
67358
67358
[ "put", "indicator", "expected-value" ]
75494
1
null
null
1
46
The common stock of the bank is considered to be part of its Tier 1 capital, but it is also subject to market risk. The [Investopedia](https://www.investopedia.com/terms/c/common-equity-tier-1-cet1.asp) definition distinguishes between Tier 1 and Tier 3 capital by the latter carrying market risk. Why is the common stock of the bank considered Tier 1 capital? Wouldn't the value of the bank's common stock also take a hit during a liquidity event?
Market risk in Tier 1 capital of a bank
CC BY-SA 4.0
null
2023-05-10T09:36:18.597
2023-05-10T09:36:18.597
null
null
62328
[ "banking-regulations", "central-banking" ]
75495
5
null
null
0
null
null
CC BY-SA 4.0
null
2023-05-10T11:16:32.403
2023-05-10T11:16:32.403
2023-05-10T11:16:32.403
-1
-1
null
75496
4
null
null
0
null
The Vasicek model is a 1-factor short-rate model.
null
CC BY-SA 4.0
null
2023-05-10T11:16:32.403
2023-05-10T13:20:14.497
2023-05-10T13:20:14.497
20795
20795
null
75497
1
null
null
2
83
I'm currently studying the [Vasicek model](https://en.wikipedia.org/wiki/Vasicek_model) of the [short interest rate](https://en.wikipedia.org/wiki/Short-rate_model) $$dr_t=a(\mu-r_t)dt+\sigma dW_t$$ I know how to solve this stochastic differential equation (SDE) and how to find the expectation and variance of $r_t$. Then I wanted to find the function that describes the evolution of the price $B(r_t,t)$ of a zero-coupon bond. I've seen that you can use Ito's formula to obtain this differential equation: $$\frac{\partial B}{\partial t}+\frac{\sigma^2}{2}\frac{\partial^2 B}{\partial r^2}+(a(\mu-r)-\lambda\sigma)\frac{\partial B}{\partial r}-rB=0 \tag{1}$$ where $\lambda$ is the market price of risk (for reference, check pages 391-392 of Yue-Kuen Kwok's [Mathematical Models of Financial Derivatives](https://accelerator086.github.io/accelerator086-Blogs-Books/Mathematical%20Models%20of%20Financial%20Derivatives%20-%20Yue%20Kuen%20Kwok.pdf) [PDF]). Other articles give this equation (sometimes considering $\lambda=0$) and some give the solution in closed form. Up to here I'm okay.

Then I need the Green's function for this equation, so I noted that the Vasicek model is a particular Ornstein–Uhlenbeck process with an additional drift term: the classic Ornstein–Uhlenbeck process $dr_t=-ar_tdt+\sigma dW_t$ can also be described in terms of a probability density function, $P(r,t)$, which specifies the probability of finding the process in the state $r$ at time $t$. This function satisfies the Fokker–Planck equation $$\frac{\partial P}{\partial t}=\frac{\sigma^2}{2}\frac{\partial^2 P}{\partial r^2}+a\frac{\partial (rP)}{\partial r} \tag{2}$$ The transition probability, also known as the Green's function, $P(r,t\mid r',t')$ is a Gaussian with mean $r'e^{-a(t-t')}$ and variance $\frac {\sigma^2}{2a}\left(1-e^{-2a(t-t')}\right)$: $$P(r,t\mid r',t')={\sqrt {\frac {a }{\pi \sigma^2(1-e^{-2a (t-t')})}}}\exp \left[-{\frac {a}{\sigma^2}}{\frac {(r-r'e^{-a (t-t')})^{2}}{1-e^{-2a (t-t')}}}\right] \tag{3} $$ This gives the probability of the state $r$ occurring at time $t$ given the initial state $r'$ at time $t'<t$. Equivalently, $P(r,t\mid r',t')$ is the solution of the Fokker–Planck equation with initial condition $P(r,t')=\delta(r-r')$.

My aim is to test some numerical methods on this model in order to extend them to the CIR model later, so I need the Green's function of this Vasicek model and the corresponding differential equation (if equation (1) is not correct).

---

My attempt

I tried to relate equations (1) and (2) by adding the missing drift term to the O-U process and considering $\lambda=0$ in (1), but I get $-aP$ in (2) and not $-rP$ as it is in (1) (also the signs are misplaced). Then I thought that maybe I should try to relate not the forward equation (2) to (1) but the backward Kolmogorov equation (which in this case is exactly equation (1) but without the term $-rB$). However, that would require getting rid of the term $-rB$ in (1), and I don't think this is possible since $B$ is a function of $r$. This is why I think relating equation (1) to equation (2), or to its backward Kolmogorov version, is not possible.

My second attempt was to change the Green's function according to the new O-U process, the one that matches the Vasicek model (the term $r'e^{-a(t-t')}$ is changed to the expected value of the Vasicek model, $\mu+[r'-\mu]e^{-a(t-t')}$), and since this solves the backward Kolmogorov equation, which is (1) without the term $-rB$, maybe I can just adjust this by a multiplying factor so that it solves (1) too.
The reasons I have for believing this are:

- I checked the new Green's function in MATLAB and it seems to solve the forward Fokker–Planck and backward Kolmogorov equations; it also gives $1$ when integrated over $(r,t)\in\mathbb{R}\times[0,1]$ (with $r'=r_0$ and $t'=0$) and over $r\in\mathbb{R}$ (with $t=1$, $r'=r_0$ and $t'=0$) [so it seems to be correct];
- I plotted the surface solution given in closed form in the articles and it matches perfectly with the integral solution $$ V(r,t) = \int_{r_{\min}}^{r_{\max}}e^{irr'}P(r,t\mid r',0)dr'$$ where $r_{\min}$ and $r_{\max}$ are chosen as an interval around the expected value of $r_t$ with radius five times the variance of $r_t$ [so it seems the multiplying factor is $e^{irr'}$, but I still don't know why...].

NOTE_1: the reason why I tried the term $e^{irr'}$ is that it is the new initial condition you get for $P(r,t\mid r',t')$ if you use the Fourier transform on equation (2), in place of $\delta(r-r')$ (for reference check page 34 of [this](https://userswww.pd.infn.it/%7Eorlandin/fisica_sis_comp/fokker_planck.pdf)).

NOTE_2: I also tried the term $e^{1r'(t-0)}$ and it seems to work too... now I'm getting a bad feeling; maybe I've messed up the coding part?
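For reference, this is the kind of sanity check I am running (a Python sketch equivalent to my MATLAB code; the parameter values $a$, $\mu$, $\sigma$, $r_0$ below are arbitrary and chosen only for illustration):

```
import numpy as np
from scipy.integrate import quad

a, mu, sigma = 0.5, 0.03, 0.02      # arbitrary illustrative parameters
r0, t = 0.02, 1.0                   # initial state r' = r0 at t' = 0

def green(r, rp, tau):
    # eq. (3) with the mean r'*exp(-a*tau) replaced by mu + (r' - mu)*exp(-a*tau)
    m = mu + (rp - mu) * np.exp(-a * tau)
    v = sigma**2 / (2 * a) * (1 - np.exp(-2 * a * tau))
    return np.exp(-(r - m)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

m_true = mu + (r0 - mu) * np.exp(-a * t)
v_true = sigma**2 / (2 * a) * (1 - np.exp(-2 * a * t))
lo, hi = m_true - 10 * np.sqrt(v_true), m_true + 10 * np.sqrt(v_true)

total, _ = quad(lambda r: green(r, r0, t), lo, hi)                 # ~1, normalisation
mean, _ = quad(lambda r: r * green(r, r0, t), lo, hi)              # ~E[r_t]
var, _ = quad(lambda r: (r - mean)**2 * green(r, r0, t), lo, hi)   # ~Var[r_t]
print(total, mean, m_true, var, v_true)
```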
Bond-pricing under the Vasicek short rate model
CC BY-SA 4.0
null
2023-05-10T12:02:06.487
2023-05-12T10:29:18.383
2023-05-12T10:29:18.383
67362
67362
[ "interest-rates", "stochastic-processes", "short-rate", "vasicek", "ornstein-uhlenbeck" ]
75498
2
null
75488
2
null
Highly speculative and would require a decent degree of domain knowledge, but I would guess that two or more adversarial agents competing against one another in a properly-configured financial context would converge towards a Nash equilibrium relatively quickly.
null
CC BY-SA 4.0
null
2023-05-10T15:43:54.657
2023-05-10T15:43:54.657
null
null
67363
null
75499
1
75511
null
4
79
I have bootstrapped a curve using several depo and swap rates and am trying to use that curve to get the NPV of a swap over a period of time. The generation of prices iteratively through time is incredibly slow. Given that I'm only pricing over a 4-month period, I wouldn't expect it to take 30 minutes. Am I doing something silly here? I saw a previous post commenting on a bug in the SWIG wrapper from Python to C++, but according to one of the project maintainers it was patched years ago. See below:

```
test_curve.referenceDate() == Date(4,1,2023)
```

Engine creation

```
yts = ql.RelinkableYieldTermStructureHandle()
yts.linkTo(test_curve)
engine = ql.DiscountingSwapEngine(yts)
```

Swap Definition and Creation

```
swapTenor = ql.Period('1Y')
overnightIndex = ql.Sofr(yts)
fixedRate = 0.01
ois_swap = ql.MakeOIS(swapTenor, overnightIndex, fixedRate, pricingEngine=engine, discountingTermStructure=yts)
```

NPV Generation

```
new_prices = []
instance = ql.Settings.instance()
start_date = ql.Date(1,1,2024)
success_counter = 0

while date < start_date:
    # Update eval date in sim
    instance.evaluationDate = date
    price = ois_swap.NPV()
    new_prices.append(price)

    # Increment date forward
    date += ql.Period('1D')
    new_curve = test_model.get_curve_by_date(date.to_date().strftime('%Y-%m-%d'))

    count = 0
    # Check for new_curve to exist
    while new_curve is None:
        date += ql.Period('1D')
        new_curve = test_model.get_curve_by_date(date.to_date().strftime('%Y-%m-%d'))
        count += 1
        if count == 100:
            break

    yts.linkTo(new_curve)
    engine = ql.DiscountingSwapEngine(yts)
    overnightIndex = ql.Sofr(yts)
    ois_swap = ql.MakeOIS(swapTenor, overnightIndex, fixedRate, pricingEngine=engine,
                          discountingTermStructure=yts, effectiveDate=ql.Date(2,1,2024))
```

The maturity date on the swap is set to May 2024. Thanks!
Quantlib Slow valuation of ois_swap on multiple eval days
CC BY-SA 4.0
null
2023-05-10T16:47:59.447
2023-05-12T18:35:16.167
2023-05-11T14:19:17.003
35442
35442
[ "programming", "quantlib", "interest-rate-swap", "ois-swaps" ]
75500
1
null
null
3
187
We know that two strategies can give the same Sharpe Ratio but very different Maximum Drawdowns. To highlight this, I constructed these two strategies myself with the same cumulative return and SR but considerably different Max Drawdowns:

![](https://i.stack.imgur.com/lFbVF.png)

I am currently optimising my strategy parameters with either one of these two measures (SR and MDD), but the loss function needs to output one single number (a final utility function). How can I "mix" these two pieces of information using an input utility such as: "I want the best Sharpe Ratio but with Max Drawdown not exceeding 20%"?

Is there a standard approach or measure that can mix the two, i.e., both a risk-adjusted, moment-based measure (Sharpe Ratio, or another ratio accounting for higher-order moments) and a measure that takes into account the order in which the returns occur (MDD, or Ulcer Index)?

EDIT: I have an idea: maybe we could compute an average of the Sharpe Ratios obtained from the distribution of daily returns, 2-day returns, 3-day returns, etc. This "Sharpe Ratio average" would take into account the order in which the returns occur over time because, even though the chart above gives the same SR for daily returns, the standard deviation of 3-month returns is much lower for Strategy A than for Strategy B. This would lead to an "Average Sharpe Ratio" that favours Strategy A (a small code sketch of this idea is appended at the end of this post). Is this intuition a common practice that I don't know about?

EDIT 2: The ACF of the biased strategy (B) shows significant autocorrelation for several lags, while the ACF of A shows no significant autocorrelation at any lag:

![](https://i.stack.imgur.com/idCAa.png)
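EDIT 3 (sketch of the idea in the first EDIT): this is roughly the computation I have in mind. It is only a sketch: `returns` is assumed to be a pandas Series of the strategy's daily simple returns, the set of horizons and the 252-day annualisation constant are arbitrary choices, and the rolling windows overlap.

```
import numpy as np
import pandas as pd

def multi_horizon_sharpe(returns: pd.Series, horizons=range(1, 64)) -> float:
    '''
    Average of the annualised Sharpe ratios computed on overlapping
    k-day compounded returns, for several horizons k.
    '''
    sharpes = []
    for k in horizons:
        # compounded k-day returns built from daily simple returns
        k_day = (1 + returns).rolling(k).apply(np.prod, raw=True).dropna() - 1
        if k_day.std() > 0:
            sharpes.append(k_day.mean() / k_day.std() * np.sqrt(252 / k))
    return float(np.mean(sharpes))
```

For a strategy whose losses cluster in time (deep drawdowns), the k-day return distributions widen faster than $\sqrt{k}$, so the longer-horizon Sharpe ratios, and hence the average, get penalised.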
Mixing Max Drawdown and Sharpe Ratio in a single utility function : is there a standard approach?
CC BY-SA 4.0
null
2023-05-10T17:03:12.457
2023-05-24T10:01:21.313
2023-05-22T15:00:55.330
63143
63143
[ "sharpe-ratio", "maximum-drawdown" ]
75501
1
null
null
0
136
Consider a pair of American and European puts with the same specifications, except that the former carries the continuous early-exercise right. Has anyone plotted the Gammas of both as functions of the underlying price and time to expiry for the underlying greater than the critical exercise price? Is the American put Gamma necessarily greater than or equal to that of the European counterpart in this domain? I would like a mathematical proof if it is true. I suspect that a negative answer may predominantly come from the region where the underlying is close to and above the critical exercise price.
Is American put Gamma always greater than the European one in the non-early-exercise domain?
CC BY-SA 4.0
null
2023-05-10T22:42:10.207
2023-05-12T11:31:05.307
2023-05-11T16:19:22.937
6686
6686
[ "options", "american-options", "gamma" ]
75505
1
null
null
2
94
I have a very detailed dataset: for each minute I can see the 3 best bid and ask prices with their associated quantities. Which measure of volatility would you use on such a dataset? Some volatility measures use only the close price; Garman–Klass uses open, low, high, and close; but here the data is much more detailed. I would like a single number which tells me how volatile the day was. I am thinking about the simple standard deviation of the mid-price for each day. Are there better estimators? Sorry if the question is obvious; I am not an expert in finance. Thanks!
Which volatility measure would you use for intraday minute data?
CC BY-SA 4.0
null
2023-05-11T08:40:59.390
2023-05-11T11:56:10.187
null
null
62001
[ "volatility" ]
75506
2
null
75505
0
null
Standard deviation is typically computed on the return distribution, not on the price itself. You could maybe:

- (1) backtest your strategy with a given initial equity
- (2) each time you place a market order, cross the order book with the 3 prices and sizes you talked about. You can then compute the all-inclusive price (i.e., including the bid–ask spread, order-book layers, and potentially broker fees). This will be more precise than the mid-price
- (3) compute the return of each minute
- (4) compute the standard deviation of the return distribution and, if you want an annualized figure, scale it by the square root of the number of return periods per year (for minute returns this is far larger than the $\sqrt{365}$ you would use for daily returns)

PS: if you really want to compute the standard deviation of the price itself and make any prediction out of it, you need to make sure your prices are stationary, otherwise the moments (i.e., mean, standard deviation, etc.) do not hold over time and your estimate may be useless.
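For concreteness, here is a rough sketch of steps (3) and (4) applied directly to the mid-price (simpler than the all-inclusive price above, but enough to show the mechanics). The column names `bid1` and `ask1`, the minute-timestamp index, and the one-number-per-day output are assumptions on my side; adjust them to your file layout:

```
import numpy as np
import pandas as pd

def daily_vol(df: pd.DataFrame) -> pd.Series:
    '''
    One "how volatile was this day" number per day, from minute quotes.
    '''
    mid = (df['bid1'] + df['ask1']) / 2
    log_ret = np.log(mid).diff().dropna()          # one-minute log returns
    grouped = log_ret.groupby(log_ret.index.date)
    # per-day volatility: std of minute returns scaled back to a daily horizon
    return grouped.std() * np.sqrt(grouped.count())
```

A common alternative with the same inputs is the realised volatility, $\sqrt{\sum_i r_i^2}$ over each day, which gives very similar numbers on minute data.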
null
CC BY-SA 4.0
null
2023-05-11T11:51:04.173
2023-05-11T11:56:10.187
2023-05-11T11:56:10.187
63143
63143
null
75507
2
null
75500
2
null
Why not use the Sortino Ratio instead of the Sharpe Ratio? This only uses downside deviation in its calculation and thus directly includes the idea of drawdown only in your loss function. In your given example, the black return line would have a higher Sortino Ratio value than that of the red return line, so you could directly optimise for this ratio. Response to Comments Re: "But here is the thing: the daily returns for the red line (both positive and negative) are exactly the same as for the black line" Well, yes, maybe in this example that is true, but I believe this is an unrealistic example. In my opinion the returns streams from two, different and unrelated sets of trading rules will not produce identical returns distributions. It is far more likely that the distributions will be different but the summary statistics will be indistinguishable. By way of example I present the following stylised chart[](https://i.stack.imgur.com/im8Y7.png)which somewhat follows the OP's chart with regard to beginning and ending values. Strategy A (Black line) is constructed from two different Gaussian distributions, one for positive returns (mean = 40, std = 1) and the other for negative returns (mean = -0.25, std = 0.25) and sorted to produce a highly desirable "stair stepping" equity curve with minimal drawdowns. Strategy B (Red line) is another set of Gaussian returns with mean and standard deviation equal to that of the combined returns of strategy A and sorted so that all negative returns occur first for a large drawdown, followed by an all positive returns drawup. The summary statistics for these are: ``` A_mean_return = 11.323 A_std_return = 18.236 A_downside_deviation_return = 0.2205 Sharpe_A = 0.6209 Sortino_A = 51.354 B_mean_return = 12.024 B_std_return = 19.134 B_downside_deviation_return = 10.714 Sharpe_B = 0.6284 Sortino_B = 1.1223 ``` The Sharpe Ratios are not exactly the same due to the nature of the random generation of the returns, but are similar enough to be statistically indistinguishable. However, the Sortino Ratio does clearly distinguish strategy A as being the more desirable. If, by some fluke, your different systems produce identical Sharpe and Sortino Ratios then you are running into the sort of problem that is illustrated by [Anscombe's Quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet) whereby you will have to resort to "graphical" methods. To my mind, the simplest way would be a gain-to-pain ratio calculated thus: Total_Gain / Max_Drawdown For the chart above these values are ``` A_gain_pain_ratio = 149.81 B_gain_pain_ratio = 3.7397 ``` which, obviously, also shows that strategy A is the better one without needing classical statistical measures to tell us this fact. Response to comments, part 2 My strategy A does indeed have drawdowns and the [Octave](https://octave.org/)/MATLAB code given below should enable you to replicate the above and see for yourself. 
``` pkg load statistics ; ## Create Strategy A returns x_d = normrnd( -0.25 , 0.25 , 250 , 1 ) ; ## drawdown distribution x_u = normrnd( 40 , 1 , 100 , 1 ) ; ## drawup distribution A = [ x_d(1:100) ; x_u(1:50) ; x_d(101:200) ; x_u(51:75) ; x_d(201:250) ; x_u(75:100) ] ; ## distributions combined A_equity_value = cumsum( [ 10000 ; A ] ) ; A_mean_return = mean( A ) ; A_std_return = std( A ) ; A_downside_deviation_return = std( x_d ) ; Sharpe_A = A_mean_return / A_std_return ; Sortino_A = A_mean_return / A_downside_deviation_return ; ## Create Strategy B returns B = normrnd( A_mean_return , A_std_return , 350 , 1 ) ; B = sort( B ) ; B_equity_value = cumsum( [ 10000 ; B ] ) ; B_ix = find( B < 0 ) ; B_mean_return = mean( B ) ; B_std_return = std( B ) ; B_downside_deviation_return = std( B( B_ix ) ) ; Sharpe_B = B_mean_return / B_std_return ; Sortino_B = B_mean_return / B_downside_deviation_return ; ## Gain pain ratio A_gain_pain_ratio = ( A_equity_value( end ) - A_equity_value( 1 ) ) / max( cummax( A_equity_value ) - A_equity_value ) ; B_gain_pain_ratio = ( B_equity_value( end ) - B_equity_value( 1 ) ) / ( B_equity_value( 1 ) - min( B_equity_value ) ) ; if ( ishandle( 1 ) ) clf( 1 ) ; endif figure( 1 ) ; h1 = axes( 'position' , [ 0.02 , 0.02 , 0.97 , 0.95 ] ) ; plot( A_equity_value , 'k' , 'linewidth' , 2 , B_equity_value , 'r' , 'linewidth' , 2 ) ; title( "Comparison of 2 Strategies' Equity Values Over Time with 'Similar' Moments and Sharpe Ratios" , "fontsize" , 15 ) ; legend( 'Strategy A Equity Value' , 'Stategy B Equity Value' , 'location' , 'northwest' , 'fontsize' , 15 ) ; ``` Are you sure that Strat A and B equity over time come from the same return distribution? No, they do not, but that is the point I'm trying to make and the code makes this explicit. You can have different distributions of returns but the summary statistics of these different distributions can be (almost) identical or indistinguishable from each other, making said summary statistics completely uninformative with respect to choosing between the underlying trading systems. If you don't like Return / MDD you could try something like Return / Average size of all individual DDs. The denominator in this expression can be adjusted in many ways, e.g. average plus 1 or 2 x standard deviation of DDs.
null
CC BY-SA 4.0
null
2023-05-11T11:58:46.497
2023-05-24T10:01:21.313
2023-05-24T10:01:21.313
252
252
null
75510
1
75532
null
3
127
I'm trying to use Python to give me more information about drawdowns than just the max drawdown and the duration of the max drawdown. I would like to determine the number of drawdowns that have occurred (beyond a certain day count threshold), the average drawdown, and the average drawdown length. I found [this](https://quant.stackexchange.com/questions/55130/global-maximum-drawdown-and-maximum-drawdown-duration-implementation-in-python) question with an answer about the max draw, and the length of the max draw, but after reading the comments, I'm unsure what to make of it. I also found [this](https://quant.stackexchange.com/questions/57703/implementation-of-maximum-drawdown-in-python-working-directly-with-returns/74134#74134) question which seems to give a different max drawdown, so I'm a bit confused. I think the second one is what I'm looking for, but I don't want a max; I want an average drawdown that has lasted more than a number of days (say five days). My dataset is a Pandas dataframe with prices. This is what I have so far, but now I'm stuck on how to proceed: ``` def avg_dd(df, column='close'): df['simple_ret'] = df[column].pct_change().fillna(0) df['cum_ret'] = (1 + df['simple_ret']).cumprod() - 1 df['nav'] = ((1 + df['cum_ret']) * 100).fillna(100) df['hwm'] = df['nav'].cummax() df['dd'] = df['nav'] / df['hwm'] - 1 ``` From here, my idea was to use the `hwm` column as an index that increments each time it hits a new high, and the distance between them was the length of that temporary drawdown. Does anyone have a source or reference that can help me out?
Average drawdown and average drawdown length in Python
CC BY-SA 4.0
null
2023-05-11T15:43:39.433
2023-05-13T13:58:24.327
null
null
42222
[ "programming", "portfolio-management", "maximum-drawdown", "drawdown" ]
75511
2
null
75499
2
null
After reading the documentation more closely and some scenario testing, I was able to determine that `ql.Settings.instance().evaluationDate = date` was the culprit. It seems that updating the `evaluationdate` causes a refresh of ALL instantiated objects within QuantLib that are related to that evaluationDate. I had instantiated a dataframe within my `test_model` class and pre-built all of the bootstrapped curves I was intending to use, which resulted in the creation of many swap and depo helper objects, all of which would be updated on the `evaluationDate` change. For added color: Python destroys objects when their reference counter reaches 0. So, I simply made all of these objects transient within event loop so that I am incrementing, at most, 1 curve and 1 swap object at each step forward. The solution is not lightning fast, since I am bootstrapping a curve at each step, but it's working. ``` while date < effectiveDate: instance.evaluationDate = date curve = bootstrap_model.get_curve_by_date(date.to_date().strftime('%Y-%m-%d'), depo=True, swaps=True) if curve is None: date += ql.Period('1D') else: yts = ql.RelinkableYieldTermStructureHandle() yts.linkTo(curve) engine = ql.DiscountingSwapEngine(yts) overnightIndex = ql.Sofr(yts) ois_swap = ql.MakeOIS(swapTenor, overnightIndex, fixedRate, pricingEngine=engine, discountingTermStructure=yts, effectiveDate=effectiveDate) swap_price = ois_swap.NPV() new_prices.append(swap_price) ```
null
CC BY-SA 4.0
null
2023-05-11T15:48:48.683
2023-05-11T15:48:48.683
null
null
35442
null
75512
1
null
null
0
30
I was reading the book Stochastic Calculus for Finance II by Shreve and I read the proof that the forward price for the underlying $S$ at time $t$ with maturity $T$ is given by $$ For_S(t,T) = \frac{S(t)}{B(t,T)}, $$ where $S(t)$ is the Stock at time $t$ and $B(t,T)$ is the price of a ZCB at time $t$ maturing at time $T$. The proof assumes no underlying model and simply argues that if the price would not fulfil this equation, we would have an arbitrage opportunity. On page 242 and 243, the author (upon deriving the notion of futures contracts) calculates the value of a long forward position (startet at $t_k$) at some future date $t_j > t_k$. Using the risk neutral pricing formula he derives $$ V_{k,j} = S(t_j) - S(t_k) \cdot \frac{B(t_j,T)}{B(t_k,T)}. $$ I was curious if I can derive this equation using a no arbitrage argument: So, assume that $V_{k,j} > S(t_j) - S(t_k) \cdot \frac{B(t_j,T)}{B(t_k,T)}$. If this is the case, I could "borrow" a long forward position and sell it, yielding $V_{k,j}$. I can enter a new forward contract (at time $t_j$), for which I have to pay $S(t_j)/B(t_j,T)$. Finally I borrow $$ \frac{S(t_k)}{B(t_k,T)} - \frac{S(t_j)}{B(t_j,T)} + \frac{S(t_j)}{B(t_j,T)^2} $$ in Bonds being worth $$ \frac{S(t_k)}{B(t_k,T)}B(t_j,T) - S(t_j) + \frac{S(t_j)}{B(t_j,T)} $$ today and sell it. Effectively I gained $V_{k,j} - \frac{S(t_j)}{B(t_j,T)} + \frac{S(t_k)}{B(t_k,T)}B(t_j,T) - S(t_j) + \frac{S(t_j)}{B(t_j,T)} = V_{k,j} + \frac{S(t_k)}{B(t_k,T)}B(t_j,T) - S(t_j)> 0$ at time $t_j$. At time $T$ I still owe the $S(T) - S(t_k)/B(t_k,T)$ from the forward I shorted and I owe $\frac{S(t_k)}{B(t_k,T)} - \frac{S(t_j)}{B(t_j,T)} + \frac{S(t_j)}{B(t_j,T)^2}$ from the bonds I borrowed. From the forward I get $S(T) - S(t_j)/B(t_j,T)$. In total I owe $$ S(T) - S(t_k)/B(t_k,T) + \frac{S(t_k)}{B(t_k,T)} - \frac{S(t_j)}{B(t_j,T)} + \frac{S(t_j)}{B(t_j,T)^2} - S(T) + S(t_j)/B(t_j,T)\\ = \frac{S(t_j)}{B(t_j,T)^2} > 0. $$ I tried to vary my strategy at time $t_j$, however I am not able to make an riskfree profit. Where is my mistake or is it simply not possible to show this by a no-arbitrage argument without risk neutral pricing? Thanks in advance!
No arbitrage argument for the price process of a forward contract
CC BY-SA 4.0
null
2023-05-11T16:04:08.703
2023-05-11T16:04:08.703
null
null
62056
[ "risk-neutral-measure", "forward", "no-arbitrage-theory", "proof" ]
75513
1
null
null
1
65
I am trying to price SOFR swaps in two different dates (the same swaps, just different curves and dates) This are my initial parameters: ``` curve_date =ql.Date (9,5,2022) ql.Settings.instance().evaluationDate = curve_date sofr = ql.Sofr() #overnightIndex swaps_calendar = ql.UnitedStates(ql.UnitedStates.FederalReserve)#calendar day_count = ql.Actual360() #day count convention settlement_days = 2 #t+2 settlement convention for SOFR swaps ``` this is the SOFR curve as of May 9th, 2022: |index |ticker |n |tenor |quote | |-----|------|-|-----|-----| |0 |USOSFR1Z CBBT Curncy |1 |1 |0.79 | |1 |USOSFR2Z CBBT Curncy |2 |1 |0.81 | |2 |USOSFR3Z CBBT Curncy |3 |1 |0.79 | |3 |USOSFRA CBBT Curncy |1 |2 |0.8 | |4 |USOSFRB CBBT Curncy |2 |2 |1.01 | |5 |USOSFRC CBBT Curncy |3 |2 |1.19 | |6 |USOSFRD CBBT Curncy |4 |2 |1.34 | |7 |USOSFRE CBBT Curncy |5 |2 |1.47 | |8 |USOSFRF CBBT Curncy |6 |2 |1.61 | |9 |USOSFRG CBBT Curncy |7 |2 |1.71 | |10 |USOSFRH CBBT Curncy |8 |2 |1.82 | |11 |USOSFRI CBBT Curncy |9 |2 |1.93 | |12 |USOSFRJ CBBT Curncy |10 |2 |2.01 | |13 |USOSFRK CBBT Curncy |11 |2 |2.09 | |14 |USOSFR1 CBBT Curncy |12 |2 |2.17 | |15 |USOSFR1F CBBT Curncy |18 |2 |2.48 | |16 |USOSFR2 CBBT Curncy |2 |3 |2.62 | |17 |USOSFR3 CBBT Curncy |3 |3 |2.69 | |18 |USOSFR4 CBBT Curncy |4 |3 |2.72 | |19 |USOSFR5 CBBT Curncy |5 |3 |2.73 | |20 |USOSFR7 CBBT Curncy |7 |3 |2.77 | |21 |USOSFR8 CBBT Curncy |8 |3 |2.78 | |22 |USOSFR9 CBBT Curncy |9 |3 |2.8 | |23 |USOSFR10 CBBT Curncy |10 |3 |2.81 | |24 |USOSFR12 CBBT Curncy |12 |3 |2.83 | |25 |USOSFR15 CBBT Curncy |15 |3 |2.85 | |26 |USOSFR20 CBBT Curncy |20 |3 |2.81 | |27 |USOSFR25 CBBT Curncy |25 |3 |2.71 | |28 |USOSFR30 CBBT Curncy |30 |3 |2.6 | |29 |USOSFR40 CBBT Curncy |40 |3 |2.4 | |30 |USOSFR50 CBBT Curncy |50 |3 |2.23 | This data is stored in a df called:`swap_data` and I use it to build tuples (rate, (tenor)) for the `OISRateHelper` objects ``` swaps= [(row.quote,(row.n, row.tenor)) for row in swap_data.itertuples(index=True, name='Pandas')] def zero_curve(settlement_days,swaps,day_count): ois_helpers = [ ql.OISRateHelper(settlement_days, #settlementDays ql.Period(*tenor), #tenor -> note that `tenor` in the list comprehension are (n,units), so uses * to unpack when calling ql.Period(n, units) ql.QuoteHandle(ql.SimpleQuote(rate/100)), #fixedRate sofr) #overnightIndex for rate, tenor in swaps] #for now I have chosen to use a logCubicDiscount term structure to ensure continuity in the inspection sofrCurve = ql.PiecewiseLogCubicDiscount(settlement_days, #referenceDate swaps_calendar,#calendar ois_helpers, #instruments day_count, #dayCounter ) sofrCurve.enableExtrapolation() #allows for extrapolation at the ends return sofrCurve ``` using this function I build a zero curve, a sofr object linked to that curve and a swap pricing engine ``` sofrCurve = zero_curve(settlement_days,swaps,day_count) valuation_Curve = ql.YieldTermStructureHandle(sofrCurve) sofrIndex = ql.Sofr(valuation_Curve) swapEngine = ql.DiscountingSwapEngine(valuation_Curve) ``` With this I create OIS swaps and price them using this curve to ensure that it's correctly calibrated: ``` effective_date = swaps_calendar.advance(curve_date, settlement_days, ql.Days) notional = 10_000_000 ois_swaps = [] for rate, tenor in swaps: schedule = ql.MakeSchedule(effective_date, swaps_calendar.advance(effective_date, ql.Period(*tenor)), ql.Period('1Y'), calendar = swaps_calendar) fixedRate = rate/100 oisSwap = ql.MakeOIS(ql.Period(*tenor), sofrIndex, fixedRate, nominal=notional) oisSwap.setPricingEngine(swapEngine) 
ois_swaps.append(oisSwap) ``` the NPVs on all the swaps is zero so they seem. I went a step further to confirm that I was getting the PV of the legs correctly by constructing a function that yields a table with the leg relevant information ``` def leg_information(effective_date, day_count,ois_swap, leg_type, sofrCurve): leg_df=pd.DataFrame(columns=['date','yearfrac','CF','discountFactor','PV','totalPV']) cumSum_pv= 0 leg = ois_swap.leg(0) if leg_type == "fixed" else ois_swap.leg(1) for index, cf in enumerate(leg): yearfrac = day_count.yearFraction(effective_date,cf.date()) df = sofrCurve.discount(yearfrac) pv = df * cf.amount() cumSum_pv += pv row={'date':datetime.datetime(cf.date().year(), cf.date().month(), cf.date().dayOfMonth()),'yearfrac':yearfrac, 'CF':cf.amount() ,'discountFactor':df,'PV':pv,'totalPV':cumSum_pv} leg_df.loc[index]=row return leg_df ``` Then I proceeded to view the fixed and float legs for the 30y swap: ``` fixed_leg = leg_information(effective_date, day_count,ois_swaps[-3], 'fixed', sofrCurve) fixed_leg.tail() ``` |date |yearfrac |CF |discountFactor |PV |totalPV | |----|--------|--|--------------|--|-------| |2048-05-11 |26.38 |263343.89 |0.5 |132298.29 |4821684 | |2049-05-11 |27.39 |264067.36 |0.49 |130173.38 |4951857.39 | |2050-05-11 |28.41 |264067.36 |0.48 |127789.7 |5079647.08 | |2051-05-11 |29.42 |264067.36 |0.48 |125514.12 |5205161.2 | |2052-05-13 |30.44 |266237.78 |0.47 |124346.16 |5329507.36 | ``` float_leg = leg_information(effective_date, day_count,ois_swaps[-3], 'Float', sofrCurve) float_leg.tail() ``` |date |yearfrac |CF |discountFactor |PV |totalPV | |----|--------|--|--------------|--|-------| |2048-05-11 |26.38 |194630.64 |0.5 |97778.23 |4976215.78 | |2049-05-11 |27.39 |191157.4 |0.49 |94232.04 |5070447.82 | |2050-05-11 |28.41 |186532.08 |0.48 |90268.17 |5160715.99 | |2051-05-11 |29.42 |181300.34 |0.48 |86174.05 |5246890.04 | |2052-05-13 |30.44 |176892.09 |0.47 |82617.32 |5329507.36 | Also, the DV01 on the swap lines up with what I see in bloomberg: `ois_swaps[-3].fixedLegBPS()` = $20462.68. 
So at this point, I feel comfortable with what the swap object because it seems to match what I see on Bloomberg using SWPM Now, when I change the date: ``` curve_date =ql.Date (9,5,2023) ql.Settings.instance().evaluationDate = curve_date effective_date = swaps_calendar.advance(curve_date, settlement_days, ql.Days) ``` and pull the new curve: |index |ticker |n |tenor |quote | |-----|------|-|-----|-----| |0 |USOSFR1Z CBBT Curncy |1 |1 |5.06 | |1 |USOSFR2Z CBBT Curncy |2 |1 |5.06 | |2 |USOSFR3Z CBBT Curncy |3 |1 |5.06 | |3 |USOSFRA CBBT Curncy |1 |2 |5.07 | |4 |USOSFRB CBBT Curncy |2 |2 |5.1 | |5 |USOSFRC CBBT Curncy |3 |2 |5.11 | |6 |USOSFRD CBBT Curncy |4 |2 |5.11 | |7 |USOSFRE CBBT Curncy |5 |2 |5.09 | |8 |USOSFRF CBBT Curncy |6 |2 |5.06 | |9 |USOSFRG CBBT Curncy |7 |2 |5.03 | |10 |USOSFRH CBBT Curncy |8 |2 |4.97 | |11 |USOSFRI CBBT Curncy |9 |2 |4.92 | |12 |USOSFRJ CBBT Curncy |10 |2 |4.87 | |13 |USOSFRK CBBT Curncy |11 |2 |4.81 | |14 |USOSFR1 CBBT Curncy |12 |2 |4.74 | |15 |USOSFR1F CBBT Curncy |18 |2 |4.28 | |16 |USOSFR2 CBBT Curncy |2 |3 |3.96 | |17 |USOSFR3 CBBT Curncy |3 |3 |3.58 | |18 |USOSFR4 CBBT Curncy |4 |3 |3.39 | |19 |USOSFR5 CBBT Curncy |5 |3 |3.3 | |20 |USOSFR7 CBBT Curncy |7 |3 |3.24 | |21 |USOSFR8 CBBT Curncy |8 |3 |3.23 | |22 |USOSFR9 CBBT Curncy |9 |3 |3.24 | |23 |USOSFR10 CBBT Curncy |10 |3 |3.24 | |24 |USOSFR12 CBBT Curncy |12 |3 |3.27 | |25 |USOSFR15 CBBT Curncy |15 |3 |3.3 | |26 |USOSFR20 CBBT Curncy |20 |3 |3.28 | |27 |USOSFR25 CBBT Curncy |25 |3 |3.2 | |28 |USOSFR30 CBBT Curncy |30 |3 |3.12 | |29 |USOSFR40 CBBT Curncy |40 |3 |2.93 | |30 |USOSFR50 CBBT Curncy |50 |3 |2.73 | store the above data in `swap_data` and proceed again to recalibrate the zero curve: ``` swaps= [(row.quote,(row.n, row.tenor)) for row in swap_data.itertuples(index=True, name='Pandas')] sofrCurve_2023 = zero_curve(settlement_days,swaps,day_count) valuation_Curve2023 = ql.YieldTermStructureHandle(sofrCurve_2023) sofrIndex2023 = ql.Sofr(valuation_Curve2023) swapEngine2023 = ql.DiscountingSwapEngine(valuation_Curve2023) ois_swaps[-3].setPricingEngine(swapEngine2023) ``` and try to get the NPV of the swap ``` ois_swaps[-3].NPV() ``` It yields a value of $60968.42 . I know that the NPV after changing the date forward is wrong. I did I simple calculation: the 30y swap rate moved from 2.60 to 3.12 ( I know it's a 29y swap 1 year later, but for illustration purposes the P&L is more less -20k* 52bps = -$1,040,000. and If I try to view the floating leg by calling: ``` float_leg = leg_information(effective_date, day_count,ois_swaps[-3], 'Float', sofrCurve) float_leg.tail() ``` I get the following: ``` RuntimeError: Missing SOFRON Actual/360 fixing for May 11th, 2022 ``` Which makes me think that I need to relink to the OvernightIndex to `sofrIndex2023` on that 30y swap (I just don't know how to do this, I have looked at the documentation and there's no hints about how to do this) So what am I doing wrong?
Quantlib SOFR swap repricing across 2 different dates
CC BY-SA 4.0
null
2023-05-11T17:26:02.887
2023-05-11T21:44:28.167
2023-05-11T18:17:53.147
66847
66847
[ "quantlib", "pricing", "sofr", "ois-swaps" ]
75514
1
null
null
2
35
Apparently there's a [new IRS rule](https://www.irs.gov/individuals/international-taxpayers/partnership-withholding) that can be summarized as "starting January 1st, 2023, investors who are not U.S. taxpayers must withhold 10% of proceeds from the sale of PTPs (Publicly Traded Partnerships) which do not have an exception." While I'm normally used to taxes on income or profit, this seems to be a tax on the sale itself, so if I buy and sell PTPs repeatedly, I am liable for 10% of the transaction value each time, possibly racking up more tax than my entire equity. Is this correct? Or is the amount just "held", and is there a way to get back part of the money at a later date?
PTP 10% Withholding
CC BY-SA 4.0
null
2023-05-11T18:14:56.063
2023-05-11T18:37:13.693
2023-05-11T18:37:13.693
16148
43597
[ "interest-rate-swap", "pnl", "tax" ]
75516
2
null
75513
1
null
I realise the question is specifically about Quantlib, but I wanted to highlight an answer using Rateslib for Python, the answer is around 1.05mm USD as you predicted. Setup your initial curve (note I have ignored most swaps to just approximate the 30y test swap) ``` from rateslib import Curve, IRS, dt, Solver curve = Curve( nodes={ dt(2022, 5, 9): 1.0, dt(2047, 5, 9): 1.0, dt(2052, 5, 9): 1.0, }, id="sofr" ) sofr_kws = dict( payment_lag=2, frequency="A", convention="act360", effective=dt(2022, 5, 11), calendar="nyc", curves="sofr", ) instruments = [ IRS(termination="25y", **sofr_kws), IRS(termination="30y", **sofr_kws), ] solver = Solver( curves=[curve], instruments=instruments, s=[2.71, 2.60], instrument_labels=["25Y", "30Y"], id="SOFR" ) ``` Then we created your tess IRS and check its NPV. ``` >>> test_irs = IRS(termination="30Y", **sofr_kws, fixed_rate=2.60, notional=10e6) >>> test_irs.npv(solver=solver) <Dual: 0.000932, ('sofr0', 'sofr1', 'sofr2'), [ 7464130.57816169 -4800825.36772211 -10833705.54868424]> ``` Then we build a second curve with new dates and rates. ``` curve2 = Curve( nodes={ dt(2023, 5, 9): 1.0, dt(2048, 5, 9): 1.0, dt(2053, 5, 9): 1.0, }, id="sofr" ) sofr_kws2 = dict( payment_lag=2, frequency="A", convention="act360", effective=dt(2023, 5, 11), calendar="nyc", curves="sofr", ) instruments = [ IRS(termination="25y", **sofr_kws2), IRS(termination="30y", **sofr_kws2), ] solver2 = Solver( curves=[curve2], instruments=instruments, s=[3.2, 3.12], instrument_labels=["25Y", "30Y"], id="SOFR" ) ``` We need to add the fixings for the test IRS since it has a payment settlement coming up. I took a look at historical fixings and the average over last year has been about 3%. ``` test_irs = IRS(termination="30Y", **sofr_kws, fixed_rate=2.60, notional=10e6, leg2_fixings=3.0) test_irs.npv(solver=solver2) <Dual: 1,049,569.356238, ('sofr0', 'sofr1', 'sofr2'), [ 7596484.75971786 -6798185.22482419 -8777872.59764332]> ```
null
CC BY-SA 4.0
null
2023-05-11T21:44:28.167
2023-05-11T21:44:28.167
null
null
29443
null
75517
2
null
54526
0
null
Another example using FixedRateBondHelper on some US treasury data and following the code above: ``` import QuantLib as ql calc_date = ql.Date(5, 5, 2023) ql.Settings.instance().evaluationDate = calc_date settlement_days = 1 face_amount = 100 data = [ ('31-08-2021', '31-08-2023', 0.125, 98.43537), ('30-09-2021', '30-09-2023', 0.25, 98.13213), ('01-11-2021', '31-10-2023', 0.375, 97.82426), ('30-11-2021', '30-11-2023', 0.5, 97.56833), ('31-12-2021', '31-12-2023', 0.75, 97.36831), ('31-01-2022', '31-01-2024', 0.875, 97.16839), ('28-02-2022', '29-02-2024', 1.5, 97.35896), ('31-03-2022', '31-03-2024', 2.25, 97.79973), ('02-05-2022', '30-04-2024', 2.5, 97.88095), ('31-05-2022', '31-05-2024', 2.5, 97.77523), ('30-06-2022', '30-06-2024', 3.0, 98.2304), ('01-08-2022', '31-07-2024', 3.0, 98.21679), ('31-08-2022', '31-08-2024', 3.25, 98.53124), ('30-09-2022', '30-09-2024', 4.25, 99.87726), ('31-10-2022', '31-10-2024', 4.375, 100.14486), ('30-11-2022', '30-11-2024', 4.5, 100.42819), ('03-01-2023', '31-12-2024', 4.25, 100.1332), ('31-01-2023', '31-01-2025', 4.125, 100.0258), ('28-02-2023', '28-02-2025', 4.625, 101.04361), ('15-03-2022', '15-03-2025', 1.75, 95.98058), ] helpers = [] for issue_date, maturity, coupon, price in data: price = ql.QuoteHandle(ql.SimpleQuote(price)) issue_date = ql.Date(issue_date, '%d-%m-%Y') maturity = ql.Date(maturity, '%d-%m-%Y') schedule = ql.MakeSchedule(issue_date, maturity, ql.Period(ql.Semiannual)) day_count = ql.ActualActual(ql.ActualActual.Bond, schedule) helper = ql.FixedRateBondHelper(price, 1, 100, schedule, [coupon / 100], day_count) helpers.append(helper) yc = ql.PiecewiseLogCubicDiscount(calc_date, helpers, day_count) ```
null
CC BY-SA 4.0
null
2023-05-11T23:04:07.350
2023-05-24T02:56:22.587
2023-05-24T02:56:22.587
34997
34997
null
75521
2
null
75501
-1
null
I think the argument of continuity suggested by the deleted post does apply. American options should be continuous in all their Greeks across "boundaries", because it is a free boundary. Given continuity, the statement fails. For example, the following diagram makes it pretty obvious. Not a math proof, I admit. [](https://i.stack.imgur.com/253Lw.png)
null
CC BY-SA 4.0
null
2023-05-12T11:31:05.307
2023-05-12T11:31:05.307
null
null
18388
null
75522
1
null
null
0
46
1. Assuming a one-period economy with two assets in which cash flows are assigned certain probabilities, using the CAPM we can derive $P_0$ given the $E(CF)$ at $t_1$. Within this distribution, we have idiosyncratic and systematic risk (total volatility). Traditionally, it is assumed that this stochastic process is stationary.

2. However, if the stock return distribution itself changes unexpectedly (e.g., probabilities, correlations, expected cash flows), there should obviously be a repricing of the stock. Is this an example of non-stationarity? Moreover, is the price movement resulting from this repricing itself also idiosyncratic or systematic risk (depending on its nature), or is it some other type of risk? Is it a "risk of change in parameters"? This new distribution can have lower risk as a whole but also a much lower $E(CF)$, resulting in a lower price despite lower ex-ante risk!
Non-stationarity and repricing as a source of idiosyncratic and systematic "risk"?
CC BY-SA 4.0
null
2023-05-12T13:12:30.603
2023-05-12T18:32:22.077
null
null
67387
[ "stochastic-volatility", "asset-pricing", "capm", "valuation", "stationarity" ]
75524
1
null
null
0
21
I am struggling with the concept of risk-neutral probabilities. My understanding of how a risk-neutral pricing framework works is as follows (discrete, binomial lattice for simplicity): I do not know the actual or real-world probability "P" of the up state of, say, a risky bond. Say the price in the market (assuming it is correctly priced) is S1 and the risk-free rate is 5%. Now I assume a world where the investors are risk neutral, and I calibrate my model to find "Q". Using Q, I get my model price to equal the market price. The bond was risky, but I'd still use Rf (5%) as the discount rate. Why is the risk-free rate the appropriate discount rate? I just incorporated the risk; should it not be the risk-free rate plus a risk premium? Is it only because this hypothetical world's risk-neutral assumption forces us to use Rf? How exactly did we take the risk out of the equation? I think we added it to the mix. As an aside, I'd be happy to hear a layman's explanation of what we are trying to do under risk-neutral valuation.
Risk Neutral Pricing - Why the Risk Free rate for Risky security (Intuition)
CC BY-SA 4.0
null
2023-05-12T15:08:29.997
2023-05-12T15:25:13.070
2023-05-12T15:25:13.070
67389
67389
[ "fixed-income", "risk", "risk-neutral-measure", "binomial-tree" ]
75525
2
null
75510
2
null
I've created a solution that hopefully works for you. Not sure if this is exactly what you had in mind. Anyways, first I'll create some random test data. This gives us a series that somewhat resembles a real asset:

```
import pandas as pd
import numpy as np

np.random.seed(1)
rand = np.random.normal(size=750)
delta_S = (0.05 * 1 / 252 + 0.2 * rand * np.sqrt(1 / 252))
df = pd.DataFrame(delta_S, columns=["S"], index=pd.bdate_range("2010-01-01","2014-01-01")[:750]).add(1).cumprod().mul(100)
```

[](https://i.stack.imgur.com/NfvyR.png)

Next we calculate the drawdown. I've excluded the first observation and then created a flag `is_dd` whenever the drawdown reaches 0. This is meant to delineate a drawdown period, i.e. each drawdown period comes to an end when we reach the zero mark. Next, I identify the drawdown regimes/periods using the `ne` operator and a time series shift. Finally, I remove observations that are just zero because they're not of interest here.

```
dd = df / df.cummax() - 1
dd = dd.iloc[1:]
dd["is_dd"] = np.where(np.isclose(dd, 0), 1, 0)
dd["regime"] = dd["is_dd"].ne(dd["is_dd"].shift(1)).cumsum()
dd = dd[~np.isclose(dd["S"], 0)]
dd.drop(columns="is_dd", inplace=True)
```

Next, you can run some analytics, like average drawdown, length etc. My solution for getting periods of at least 5 days is not very neat but it works:

```
def days_to_trough(x):
    t_date = x[x == x.min()].index
    return (t_date - x.index[0]).days

def days_of_drawdown(x):
    return (x.index[-1] - x.index[0]).days

atleast_5 = dd.groupby("regime")["S"].count().gt(5).replace(False, np.nan).dropna().index
dd[dd["regime"].isin(atleast_5)].groupby("regime")["S"].agg([days_of_drawdown, "min", "mean", days_to_trough]).round(3).T
```

[](https://i.stack.imgur.com/PdggH.png)

Note if you call the `min` function on this you'll get a max drawdown of 15% (see regime 51) which matches what you get by simple computation of the MDD on the series. It uses the values from 2011-04-14 and 2011-08-12 in my artificial time series and is calculated as `129.503421/152.829372 - 1`.

Finally, a bit of viz:

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(13,5))
dd["S"].plot(title="MaxDD vs regimes", ax=ax)
xpos = dd.reset_index().groupby("regime").first()["index"]
for x in xpos:
    ax.axvline(x=x, color='r', linestyle='-', alpha=0.5)
```

Each vertical line delineates a new drawdown period:

[](https://i.stack.imgur.com/6afyr.png)

Hope this helps!
null
CC BY-SA 4.0
null
2023-05-12T15:50:56.317
2023-05-12T16:38:54.453
2023-05-12T16:38:54.453
31457
31457
null
75526
2
null
75522
1
null
Stationarity as a phenomenon arises from the time dimension. In a single period economy, there is no time dimension, so we cannot talk about stationarity.
null
CC BY-SA 4.0
null
2023-05-12T18:32:22.077
2023-05-12T18:32:22.077
null
null
19645
null
75528
1
null
null
2
61
It is my understanding that open interest option values on financial websites are a reflection of a snapshot value each day. Is anyone aware of methods for estimating intraday open interest, or aware of any financial data vendors that offer their estimations on this?
Methods for tracking option open interest intraday
CC BY-SA 4.0
null
2023-05-13T04:18:03.697
2023-05-13T04:18:03.697
null
null
67400
[ "options", "derivatives", "quant-trading-strategies", "algorithmic-trading", "trading-systems" ]
75529
1
75530
null
1
91
In [this](https://en.wikipedia.org/wiki/Brownian_model_of_financial_markets#Stocks) wikipedia page, we consider the following financial market [](https://i.stack.imgur.com/Aa4rW.png) The formulas for the stocks are given here [](https://i.stack.imgur.com/0Qx5F.png) And the gain process of a portfolio $\pi$ is defined such that [](https://i.stack.imgur.com/YKVVk.png) From what I understand, the first term of the formula of the gain process is due to the riskless asset, meaning that we consider undiscounted quantities (otherwise the riskless asset would not be considered in the expression of the gains I guess). But then the second term comes from the discounted formula of the risky assets. Hence it is a bit unclear for me what exactly are the computations behind this formula and whether we use discounted quantities or not. I thought that the formula for gain process was roughly given by \begin{equation} G(t) = \int_0^t \pi_r \frac{dS_r}{S_r} \end{equation} but this doesn't seem to correspond with Wikipedia. I would be glad if someone could explain a bit more about it, particularly since it is hard to find any reference for this or gain processes in general. Thank you in advance.
Confusion about the formula for gain process in a financial market
CC BY-SA 4.0
null
2023-05-13T09:24:03.053
2023-05-13T13:03:15.057
2023-05-13T09:36:30.963
60817
60817
[ "stochastic-processes", "brownian-motion", "portfolio", "geometric-brownian" ]
75530
2
null
75529
2
null
@nbbo2 Thank you very much for providing this useful reference, I had a look into it and I think I understand now :) For simplicity, let's take $A \equiv 0$, $\delta \equiv 0$ and $r(s) \equiv r$ (it is not very important anyway). Using undiscounted expression of the price process, one has that \begin{align} dG(t) &= \sum_i \pi_i(t) \frac{dS_i(t)}{S_i(t)} \\ &= \sum_i \pi_i(t) b_i(t) dt + \sum_i \pi_i(t) \sum_j \sigma_{ij}dW_j(t) \\ &= \sum_i \pi_i(t) (b_i(t) - r) dt + \sum_i \pi_i(t)rdt + \sum_i \pi_i(t) \sum_j \sigma_{ij}dW_j(t) \\ \end{align} But now since $\frac{\pi_i(t)}{G(t)}$ is the proportion of wealth invested in asset i at time t, it is clear that $\sum_i \frac{\pi_i(t)}{G(t)} = 1$ and thus \begin{align} \sum_i \pi_i(t)rdt &= \sum_i \frac{\pi_i(t)}{G(t)} G(t)rdt \\ &= G(t)rdt \end{align} Therefore we finally obtain \begin{align} dG(t) = \sum \pi_i(t) (b_i(t) - r) dt + G(t)rdt + \sum_i \pi_i(t) \sum_j \sigma_{ij}dW_j(t) \end{align} which is what is obtained in the Wikipedia article.
null
CC BY-SA 4.0
null
2023-05-13T12:39:12.233
2023-05-13T13:03:15.057
2023-05-13T13:03:15.057
60817
60817
null
75531
1
null
null
5
93
The company will have to pay out an amount of liabilities $13594$ at the moment $t=9$. In $t=0$ they want to cover it with 4-years zero-coupon bonds and yearly perpetual annuity that is due in arrears using immunization strategy (match the duration of portfolio assets with the duration of future liabilities). How much (in %) has to change share of perpetual annuity in portfolio in $t=1$ to keep portfolio immuziation? Interest rate r=8%. My approach: At the momment $t=0$: $$p_1\underbrace{Duration_{0}(bonds)}_{4}+p_2Duration_{0}(annuity)=\underbrace{Duration_{0}(liabillities)}_{9} $$ At the momment $t=1$: $$ p_1^{'}\underbrace{Duration_{1}(bonds)}_{3}+p_2^{'}Duration_{1}(annuity)= \underbrace{Duration_{1}(liabillities)}_{8} $$ Where $$ p_1=\frac{PV_{0}(bonds)}{PV_{0}(bond)+PV_{0}(annuity)}=\frac{PV_{0}(bonds)}{13594v^9}, \\ p_2=\frac{PV_{0}(annuity)}{PV_{0}(bond)+PV_{0}(annuity)}=\frac{PV_{0}(annuity)}{13594v^9}$$ $$ p_1^{'}=\frac{PV_{1}(bonds)}{PV_{1}(bond)+PV_{1}(annuity)}=\frac{PV_{1}(bonds)}{13594v^8}, \\ p_2^{'}=\frac{PV_{1}(annuity)}{PV_{1}(bond)+PV_{1}(annuity)}=\frac{PV_{1}(annuity)}{13594v^8}$$ For perpetual annuity $Duration_{0}(annuity)=Duration_{1}(annuity)=:D$. So we have simultaneous equations $$ (\star) \begin{cases} 4p_1+Dp_2=9 \\ 3p_1^{'}+Dp_2^{'}=8 \\ p_1+p_2=1 \\ p_1^{'}+p_2^{'}=1 \end{cases} $$ Thus $$ \begin{cases} 4\frac{PV_{0}(bonds)}{13594v^9}+D\frac{PV_{0}(annuity)}{13594v^9}=9 \\ 3\frac{PV_{1}(bonds)}{13594v^8}+D\frac{PV_{1}(annuity)}{13594v^8}=8 \\ \frac{PV_{0}(bonds)}{13594v^9}+\frac{PV_{0}(annuity)}{13594v^9}=1 \\ \frac{PV_{1}(bonds)}{13594v^8}+\frac{PV_{1}(annuity)}{13594v^8}=1 \end{cases} $$ And I cannot solve it and generally don't know if the approche is right. I'd be really appreciate it if somebody could help me out with that exercise. Edit: Duration for perpetual annuity is dependent only on interest rate: $D=\frac{\sum_{k=1}^{\infty} kv^kCF}{\sum_{k=1}^{\infty} v^kCF}=\frac{Ia{\infty}}{a_{\infty}}=\frac{\frac{1}{r}+\frac{1}{r^2}}{\frac{1}{r}}=1+\frac{1}{r}=\frac{1.08}{0.08}=13.5$ If we insert it to $(\star)$, then we get $p_2=\frac{5}{9.5}=0.526, p_2^{'}=\frac{5}{10.5}=0.476 $. So $p_2^{'}-p_2=-0.05$ and the right answer is $-0.0228$. Maybe somebody detects what is not ok here? In this solution I don't use information that the amount of liabillities is $13594$ so probably there is some other (right) solution.
Portfolio immunization in time
CC BY-SA 4.0
null
2023-05-13T13:20:14.637
2023-05-18T09:04:17.683
2023-05-18T09:04:17.683
848
67406
[ "duration" ]
75532
2
null
75510
2
null
A slightly different approach than @oronimbus. Hopefully, between both answers, you can accomplish your goal. The below function takes a Pandas Dataframe, `df`, with prices (not returns) in a column called `close`, does all the necessary calculations, and returns the number of drawdowns, the average depth of all the draws, and the average length of time (in days) to recover the draw (i.e., achieve a new high). On top of the information that the function returns, all the data needed for calculations are added to the dataframe as columns so that you can examine them and understand what I did to get to the output. ``` import pandas as pd import numpy as np def avg_dd(df, column='close'): df['simple_ret'] = df[column].pct_change().fillna(0) df['cum_ret'] = (1 + df['simple_ret']).cumprod() - 1 df['nav'] = ((1 + df['cum_ret']) * 100).fillna(100) df['hwm'] = df['nav'].cummax() df['dd'] = df['nav'] / df['hwm'] - 1 df['hwm_idx'] = (df['nav'] .expanding(min_periods=1) .apply(lambda x: x.argmax()) .fillna(0) .astype(int)) df['dd_length'] = (df['hwm_idx'] - df['hwm_idx'].shift(1) - 1).fillna(0) df['dd_length'] = df['dd_length'][df['dd_length'] > 5] df['dd_length'].fillna(0, inplace=True) dd_end_idx = df['hwm_idx'].loc[df['dd_length'] != 0] temp_dd_days = df['dd_length'].loc[df['dd_length'] != 0] dd_start_idx = dd_end_idx - temp_dd_days temp_dd = [min(df['dd'].loc[df.index[int(dd_start_idx[i])]: df.index[int(dd_end_idx[i])]]) for i in range(len(dd_end_idx))] num_dd = len(temp_dd) avg_dd = np.average(temp_dd) avg_dd_length = (df['dd_length'][df['dd_length'] > 0]).mean() return num_dd, avg_dd, avg_dd_length ``` The return of the function is a tuple with the information you are looking for. Happy to explain what is going on in the code if need be, but I think it's pretty easy to figure it out. Good luck!
null
CC BY-SA 4.0
null
2023-05-13T13:58:24.327
2023-05-13T13:58:24.327
null
null
26556
null
75533
2
null
74975
0
null
With respect, it looks like a problem with the way you're gathering the data. What you need to build is a data set recording the fill rate at each tick from the bid/ask, then you can find A and k. The intuition is that the fill rate must be decreasing in distance from the best prices because in order to fill at, say tick 10, you must have already filled tick 9. I don't think you can use candle data to do the calibration, you need high frequency data. Hope that helps, Cheers, Paul.
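P.S. For concreteness, once you have that data set, the fit itself is just a log-linear regression, assuming the usual exponential intensity $\lambda(\delta)=A e^{-k\delta}$. The numbers below are made up purely for illustration:

```
import numpy as np

# hypothetical measured data: empirical fill intensity at each depth (in ticks)
depth_ticks = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
fill_rate = np.array([0.90, 0.55, 0.34, 0.21, 0.13, 0.08, 0.05, 0.03, 0.02, 0.012])

# lambda(delta) = A * exp(-k * delta)  =>  log(lambda) = log(A) - k * delta
slope, intercept = np.polyfit(depth_ticks, np.log(fill_rate), 1)
k = -slope
A = np.exp(intercept)
print(A, k)
```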
null
CC BY-SA 4.0
null
2023-05-13T15:09:27.373
2023-05-13T15:09:27.373
null
null
3457
null
75534
1
null
null
3
77
I was reading a textbook about finding the price of an option in a one-period binomial model. The textbook way of doing it is to replicate the option with cash and stock at $t=T$, and then calculate the portfolio value at $t=0$.

For example: $r=0.1, s=10, u=1.2, d=0.7, p_u=0.2,p_d=0.8$, and we want to find the price of a call option with strike $K=9$.

We can replicate this option with $x$ units of cash and $y$ units of stock. We get: $x(1+r)+yS_u=(S_u-K)^+, x(1+r)+yS_d=(S_d-K)^+$

Plugging in numbers, we get: $1.1x+12y=3, 1.1x+7y=0$

The solution is: $y=\frac{3}{5}, x=-\frac{21}{5.5}$

The portfolio value at $t=0$ is $x+ys=\frac{12}{5.5}$

If we use the CAPM to solve this problem, we first find the market return (treating the stock as the market) $\mu_m$ and the market variance $\sigma_m^2$. Let the call option price at $t=0$ be $c$.

$\mu_m=s(p_uu+p_dd)-s=10(1.2*0.2+0.8*0.7)-10=-2$

$\sigma_m^2=p_u(s(u-1)-\mu_m)^2+p_d(s(d-1)-\mu_m)^2=0.2*(2-(-2))^2+0.8*(-3-(-2))^2=3.2+0.8=4$

Now calculate the covariance of the market and the call option, $cov_{m,c}$. First calculate the average return of the option, $\mu_c$:

$\mu_c=p_u(su-K)^++p_d(sd-K)^+-c=0.2(12-9)+0.8*0-c=0.6-c$

and then

$cov_{m,c}=p_u(s(u-1)-\mu_m)((su-K)^+-c-\mu_c)+p_d(s(d-1)-\mu_m)((sd-K)^+-c-\mu_c) =0.2(2-(-2))((3-c)-(0.6-c))+0.8(-3-(-2))(0-c-(0.6-c))=0.2*4*2.4+0.8*(-1)*(-0.6)=2.4$

We get $\beta=\frac{cov_{m,c}}{\sigma_m^2}=0.6$

Now, using the CAPM formula, we get: $\mu_c=rc+\beta(\mu_m-rs)$, which is: $0.6-c=0.1c+0.6(-2-0.1*10)$

Solving for $c$, we get $c=\frac{12}{5.5}$

So we get the same result using the CAPM.
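For anyone who wants to check the arithmetic, here is a small script that reproduces both computations numerically. It introduces nothing new and only restates the numbers above (the variable `pd_` is just the down-state probability, renamed to avoid the usual pandas alias):

```
import numpy as np

r, s, u, d, pu, pd_, K = 0.1, 10, 1.2, 0.7, 0.2, 0.8, 9

# 1) Replication: x units of cash, y units of stock
#    x(1+r) + y*s*u = (s*u - K)^+ ,  x(1+r) + y*s*d = (s*d - K)^+
A = np.array([[1 + r, s * u], [1 + r, s * d]])
b = np.array([max(s * u - K, 0.0), max(s * d - K, 0.0)])
x, y = np.linalg.solve(A, b)
c_replication = x + y * s            # 12/5.5 = 2.1818...

# 2) CAPM: mu_c = r*c + beta*(mu_m - r*s) with beta = cov(m, c) / var(m)
cu, cd = max(s * u - K, 0.0), max(s * d - K, 0.0)
mu_m = s * (pu * u + pd_ * d) - s
var_m = pu * (s * (u - 1) - mu_m) ** 2 + pd_ * (s * (d - 1) - mu_m) ** 2
e_payoff = pu * cu + pd_ * cd        # mu_c = e_payoff - c, and c cancels in the covariance
cov_mc = pu * (s * (u - 1) - mu_m) * (cu - e_payoff) \
       + pd_ * (s * (d - 1) - mu_m) * (cd - e_payoff)
beta = cov_mc / var_m
c_capm = (e_payoff - beta * (mu_m - r * s)) / (1 + r)

print(c_replication, c_capm)         # both equal 12/5.5
```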
Using CAPM to find the price of an option
CC BY-SA 4.0
null
2023-05-13T15:21:04.253
2023-05-13T15:22:57.593
2023-05-13T15:22:57.593
57147
57147
[ "option-pricing", "capm" ]
75535
2
null
49804
0
null
Here's something that made it click for me; maybe it will help someone else. Dates again for convenience:
> Mar 4, Mar 5, Mar 8
> Jun 4, Jun 7
> Sep 1, Sep 5
> Dec 1, Dec 2, Dec 8

---

One key thing to notice is to break down the first statement by "Me":
> I don't know A's birthday, C doesn't know either.

It conveys two things:
- C doesn't know the real birthday
- I don't know the real birthday

Let's break each down:
- C doesn't know the real birthday: If this is true, then it can't be a birthday with a unique date, so it's not June 7th or Dec 2nd.
- I don't know the real birthday: How can I know for sure that C doesn't know the real birthday? Well, I know the month. If I were told the month were December, then the options are Dec 1, Dec 2, Dec 8. In which case, C could know the real birthday, which violates the bullet point above. So I can't have been told December is the month. The same logic goes for June.

---

Once we've eliminated June and December, if C knows it, then it has to be a unique day in March or September, so it has to be Sep 1st.
null
CC BY-SA 4.0
null
2023-05-13T17:22:13.027
2023-05-13T17:22:13.027
null
null
38135
null
75536
1
null
null
0
90
I am currently trading forex on capital.com, but since they don't offer tick data I am considering fetching tick data from polygon.io and then trading on capital.com. I am wondering whether this is a bad idea. I am still new to quantitative trading, so I would love some advice.
Is it okay to fetch market data from platform A and trade on platform B? (Forex)
CC BY-SA 4.0
null
2023-05-13T19:59:59.500
2023-05-13T19:59:59.500
null
null
67409
[ "fx", "market-data" ]
75537
1
null
null
0
64
Just noticed after upgrading to the most recent version of QuantLib Python that the class ql.SabrSwaptionVolCube is now available. This is a very useful class in that it behaves in very much the same way as the now deprecated ql.SwaptionVolCube1 class and takes the same inputs (swaption ATM vol matrix, strike spreads, vol spreads, etc.) along with $\alpha,\beta,\nu,\rho$ vectors to return a SABR vol cube.

Now, to calibrate the ATM vols $\sigma_{N,ATM}$, we can take method 2 of [this](https://www.mathworks.com/help/fininst/calibrating-sabr-model-for-normal-volatilities-using-analytic-pricer.html) approach: i.e. fix $\beta=0$ (assuming a Bachelier distribution of the forwards) and, while calibrating $\nu,\rho$, recursively reset the $\alpha$ parameter by finding the (smallest positive) root of the cubic polynomial
$$\frac{\beta(\beta-2)T}{24F^{2-2\beta}}\alpha^3 +\frac{\rho\beta\nu T}{4F^{1-\beta}}\alpha^2+\Big(1+\frac{2-3\rho^2}{24}\nu^2T\Big)\alpha-\sigma_{N,ATM}F^{-\beta}=0.$$

However, this obviously works only for the option expiries and swap tenors specified in the skew matrix (what the class refers to as "sparse parameters"). It does not calibrate $\sigma_{N,ATM}$ for expiries and tenors not given in the skew matrix (what the class refers to as "dense parameters"). So, for example, let's say I input skew data for the subset of expiries 1m,3m,1y,5y,10y,30y on the subset of swap tenors 2y,5y,10y,30y; then the above approach will return the correctly calibrated $\sigma_{N,ATM}$ for, say, 3m10y but not 3m15y (even though the ATM swaption vol matrix being supplied has the 15y tail).

My question is how does one achieve the ATM calibration for those tenors not in the skew matrix? One approach is to include every tail and expiry in the skew matrix from the whole ATM vol matrix, but this is impractical as skew data is sparsely available in the market (that's the whole point of using SABR!). Ideally, what is to be achieved is for the skew to be derived from the sparse parameters while the ATM vol comes from the ATM vol matrix.

In any case, one approach that does work is to supply the full set of expiries and tenors and fill the vol spreads at each strike offset with the "appropriate values" from the sparse data (i.e. if the strike ATM+1% entry for 3m10y is, say, +0.15% normals from the market, then the same value should be entered for 3m15y). If any QuantLib experts out there have a more intelligent solution to the above, or if I'm missing something in my approach, please do respond.
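For reference, a small sketch of the $\alpha$ root-finding step described above (my own code, not a QuantLib call; the inputs at the bottom are placeholder values):

```
import numpy as np

def alpha_from_atm_normal_vol(sigma_atm, F, T, beta, rho, nu):
    # coefficients of the cubic linking alpha to the ATM normal vol
    c3 = beta * (beta - 2) * T / (24 * F ** (2 - 2 * beta))
    c2 = rho * beta * nu * T / (4 * F ** (1 - beta))
    c1 = 1 + (2 - 3 * rho ** 2) / 24 * nu ** 2 * T
    c0 = -sigma_atm * F ** (-beta)
    roots = np.roots([c3, c2, c1, c0])
    # keep the smallest positive real root
    real_pos = [r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0]
    return min(real_pos)

# placeholder inputs: 1% ATM normal vol, 3% forward, 1y expiry, beta = 0
print(alpha_from_atm_normal_vol(0.01, 0.03, 1.0, 0.0, -0.2, 0.4))
```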
SabrSwaptionVolCube Class in QuantLib Python
CC BY-SA 4.0
null
2023-05-14T09:20:46.087
2023-05-14T09:20:46.087
null
null
35980
[ "quantlib", "swaption", "sabr" ]
75539
1
null
null
2
60
In this paper, [Labordère](https://deliverypdf.ssrn.com/delivery.php?ID=949097084122118080122097012069087091032013032038091020066027106007021091010006125022049012018120037051097086107093125094121101009005008019049003103030105071106103050046045083030001100102065070010114109127092089126110012099107086112098073027101024003&EXT=pdf&INDEX=TRUE), the author computes a probabilistic representation of the vanna/vomma (volga) break-even levels. He mentions that they can be used to calibrate LSV models to historical vanna/vomma break-evens. Does anyone know how you can do that, and why you would calibrate your model to these break-evens? Thanks in advance.
Calibration of LSV models to vanna/volga break-even
CC BY-SA 4.0
null
2023-05-14T18:02:27.430
2023-05-14T18:02:27.430
null
null
62047
[ "equities", "stochastic-volatility", "calibration", "local-volatility", "vanna-volga" ]
75540
1
75545
null
6
218
In derivative pricing models, we often use the letter $q$ to designate the dividend yield i.e.: $$\textrm{d}S_t=S_t((\mu-q) \textrm{d}t+\sigma\textrm{d}W_t)$$ for the price process $S$ of the stock. Is there some historical reason for this notation convention? Or maybe a reference to some term I don’t know? I can imagine the letter $d$ might be avoided due to the use of differentials $\textrm{d}t$. This specific parameter catches my attention because it is the only common model parameter or function which uses a Roman letter without direct reference to a common term such as $r$ for rate, $S$ for stock, $V$ for value or $C$ for call.
Why do we use the letter $q$ for dividends?
CC BY-SA 4.0
null
2023-05-14T20:31:08.707
2023-05-16T20:42:12.297
2023-05-14T20:36:02.730
20454
20454
[ "reference-request", "dividends", "notation" ]
75541
1
null
null
4
149
[![enter image description here][1]][1]

A call price is bounded when $\sigma\sqrt{T}$ goes to $0$ and $\infty$ by:
$$C_{inf} = e^{-rT}[F-K] \leq C \leq C_{sup}=S $$

Now a simple rearrangement of the Black-Scholes formula gives:
$$ C = N_1S - e^{-rT}N_2K = e^{-rT}N_2[F-K] + [N_1-N_2]S$$
$$ = \frac{N_2}{N_1}N_1e^{-rT}[F-K] + [1-\frac{N_2}{N_1}]N_1S$$
$$ = \frac{N_2}{N_1}N_1C_{inf} + [1-\frac{N_2}{N_1}]N_1C_{sup}$$
$$ = \frac{N_2}{N_1}\widehat{C_{inf}} + [1-\frac{N_2}{N_1}]\widehat{C_{sup}}$$
$$ = \alpha\widehat{C_{inf}} + [1-\alpha]\widehat{C_{sup}}$$

The last formula reads as a convex combination of two functions which are the extreme values of a call contract. It is actually a probabilistic convex combination, as the coefficient $\alpha = \frac{N_2}{N_1}$ is not constant but depends on the state $(S,\sigma,T-t)$:
$$\alpha = \alpha(s,\sigma,ttm)$$

- $\widehat{C_{inf}} = N_1e^{-rT}[F-K]$ is the "forward intrinsic value", or your expected payoff.
- $\widehat{C_{sup}} = N_1S = \Delta S$ is simply your delta-hedging portfolio value. Being a convex combination between these two bounds ensures the no-arbitrage of the price.
- The occurrence of the ratio $\frac{N_2}{N_1}$ is interesting. Still looking for a good interpretation... For now:

$$\alpha = \frac{N_2}{N_1} = \mathbb{P}(C_T > 0|S_t,\sigma,ttm) = \frac{\mathbb{P}(S_T > K)}{N_1}$$

As $ttm \rightarrow 0$ for an ITM option, $N_1 \rightarrow 1$, so
$$\mathbb{P}(C_T > 0) \rightarrow \mathbb{P}(S_T > K) = N_2 $$

So the interpolation bounds (dark blue and green curves in the picture) and the interpolation coefficient $\alpha$ change dynamically with the contract's $\Delta = N_1$, which is the degree of the contract's linearity (w.r.t. the underlying) and depends on the state $(s, \sigma, ttm)$.

Example: for the forward contract (lower bound) $C^{inf} = e^{-rT}[F-K]$, $\Delta = N_1 = 1$:
$$\mathbb{P}(C^{inf}_T > 0) = \mathbb{P}(S_T - K > 0) = \mathbb{P}(S_T > K) = N_2 = \frac{N_2}{N_1} $$

Any corrections or additional interpretations are much appreciated.

Edit: I found an interpretation of the ratio $\frac{N_2}{N_1}$ in terms of the elasticity $e$. The elasticity is the ratio of the option's relative price change to the stock's relative price change:
$$e = \frac{\frac{\Delta C}{C}} { \frac{\Delta S}{S} } $$

Delta-hedging in BS gives, for a small change:
$$\Delta C = \frac{\partial C}{\partial S}\Delta S = N_1\Delta S$$

Hence,
$$ e = \frac{N_1 S}{C} = \frac{N_1 S}{N_1S - Ke^{-rT}N_2} $$
$$= 1 + \frac{Ke^{-rT}N_2}{N_1S - Ke^{-rT}N_2} $$
$$= 1 + \frac{\frac{N_2}{N_1}}{\frac{S}{Ke^{-rT}} - \frac{N_2}{N_1}}$$

Finally,
$$\frac{N_2}{N_1} = \frac{F}{K} \frac{e -1}{e} $$

Note that the option's elasticity $e \geq e_{stock}=1$. Also, from a CAPM perspective:
$$\beta_{option} = e \cdot \beta_{stock}$$

[1]: https://i.stack.imgur.com/pELh1.png
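A quick numerical sanity check of the decomposition (my own sketch; it uses scipy's normal CDF and arbitrary parameter values):

```
import numpy as np
from scipy.stats import norm

S, K, r, sigma, T = 100.0, 95.0, 0.03, 0.2, 1.0
F = S * np.exp(r * T)

d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
N1, N2 = norm.cdf(d1), norm.cdf(d2)

C = N1 * S - np.exp(-r * T) * N2 * K          # Black-Scholes call
C_inf_hat = N1 * np.exp(-r * T) * (F - K)     # N1 times the forward intrinsic value
C_sup_hat = N1 * S                            # delta-hedge portfolio value
alpha = N2 / N1

print(C, alpha * C_inf_hat + (1 - alpha) * C_sup_hat)   # the two numbers match
```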
Black-Scholes formula is a (probabilistic) convex combination
CC BY-SA 4.0
null
2023-05-14T23:10:24.650
2023-06-04T01:16:04.580
2023-06-04T01:16:04.580
60070
60070
[ "options", "black-scholes" ]
75542
1
null
null
2
35
Suppose $W$ is a $\mathbb{P}$-Brownian motion and the process $S$ follows a geometric $\mathbb{P}$-Brownian motion model with respect to $W$. $S$ is given by \begin{equation} dS(t) = S(t)\big((\mu - r)dt + \sigma dW(t)\big) \end{equation} where $\mu, r, \sigma \in \mathbb{R}$. Thanks to Girsanov's theorem, we know that $S$ is a $\mathbb{Q}$-martingale for $\mathbb{Q}$ defined by \begin{equation} \frac{d\mathbb{Q}}{d\mathbb{P}} = \mathcal{E}\Big(\int \lambda dW\Big) \end{equation} where $\lambda = \frac{\mu - r}{\sigma}$. This is the case where the Brownian motion and $S$ are one-dimensional. What can be said about the multivariate case?

I tried something: when $W$ is $d$-dimensional and $S$ is $n$-dimensional, with $n, d \geq 1$, then for all $i \in \{1,...,n\}$, $S_i$ is given by \begin{equation} dS_i(t) = S_i(t)\big((\mu_i -r)dt + \sum_{j = 1}^d \sigma_{ij}dW_j(t)\big) \end{equation} If we define $\lambda$ as \begin{equation} \lambda_{ij} = \frac{\mu_i - r}{\sigma_{ij}d} \end{equation} maybe somehow we could use Girsanov's theorem in a similar way as before to find a probability measure such that $S$ is a martingale? I cannot find a way to do this properly. The problem is that with this approach, for each $i \leq n$, we only find a measure under which $S_i$ is a martingale, but I am not sure how to deduce a result for $S$.

edit: Is it enough to take each $i \leq n$ individually, find a measure $\mathbb{Q}_i$ under which $S_i$ is a martingale by using the approach I mentioned before, and then say that $S$ is a martingale under the product measure $\otimes_{i \leq n} \mathbb{Q}_i$? According to [this](https://math.stackexchange.com/questions/85616/definition-of-multivariate-martingale), a sufficient condition for a multivariate process to be a martingale is that each component separately is a martingale.

Thank you in advance for your help.
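For reference, a sketch of the standard multivariate construction (my addition, not part of the question): instead of one $\lambda_{ij}$ per component, one usually looks for a single vector $\lambda \in \mathbb{R}^d$ solving the linear system below and drives the measure change with that common $\lambda$ (same sign convention as in the one-dimensional case above); such a $\lambda$ exists whenever the vector $(\mu_i - r)_i$ lies in the range of the volatility matrix $\sigma$.
\begin{equation}
\sum_{j=1}^d \sigma_{ij}\lambda_j = \mu_i - r \quad \text{for all } i \in \{1,\dots,n\}, \qquad \frac{d\mathbb{Q}}{d\mathbb{P}} = \mathcal{E}\Big(\int \lambda^\top dW\Big)
\end{equation}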
multivariate geometric brownian motion equivalent martingale measure
CC BY-SA 4.0
null
2023-05-15T13:14:34.167
2023-05-15T14:09:30.807
2023-05-15T14:09:30.807
60817
60817
[ "stochastic-processes", "stochastic-calculus", "brownian-motion", "martingale", "geometric-brownian" ]
75543
2
null
71003
0
null
Firstly, you need a value for each period during your trading window to calculate the Sharpe ratio. Looking at your data, you have three options IMO:

- Put 0.0 values for the days that you didn't trade
- Track your total portfolio performance, per day
- Track the performance of each position, per day, and reconcile at the end

`#1` would be a little odd if your concern is individual position tracking.

`#2` would be the easiest, but wouldn't really take into account individual trades, which might run against what you're trying to achieve here.

`#3` sounds great, but would be more complex to reconcile and report over a period of time. For example, if you calculate the SR for a position you held for 3 days, then again for a position you held for 5 days, then you effectively have 2 SR values over two differing periods (3 and 5 days, respectively). So how do you reconcile that? Straight mean? Weighted mean? What about the days you didn't trade? Are you negative on those days if the market moves up?
null
CC BY-SA 4.0
null
2023-05-15T16:12:59.477
2023-05-15T16:12:59.477
null
null
57437
null
75545
2
null
75540
5
null
It seems it didn't take long before the case of continuous dividends was considered in the literature. Robert Merton's 1973 paper "Theory of Rational Option Pricing" considers the case of dividends in section 7 and denotes it with a $D$.

In "An Overview of Contingent Claims Pricing" from 1988 by John Hull and Alan White, $q$ is used, and the 2nd edition of Hull's "Options, Futures, and Other Derivatives" (can't find the first one) does as well. Unfortunately, they do not cite the source of this notation and I didn't find any interesting leads in the references section of the paper.

The notation didn't immediately catch on: in the 1990 paper by David C. Leonard and Michael E. Solt, "On Using the Black-Scholes Model to Value Warrants", the dividend yield is still $d$. The 1996 "American Options on Dividend-Paying Assets" by Mark Broadie and Jérôme Detemple used $\delta$. In "American options with stochastic dividends and volatility: A nonparametric investigation" by Mark Broadie, Jérôme Detemple, Eric Ghysels and Olivier Torres from 2000 it's the same.

To answer the question why: because Hull has been doing it for a very long time now and many people read his book. I made a guess in the comments as to why Hull chose $q$:

> If I had to guess it's because $q$ is alphabetically close to $r$

In the 2nd edition Hull introduces $q$ as below. A few pages back $r$ is introduced in a similar way, and this 'closeness' of definitions might have suggested this convention to Hull for didactic purposes.

A more far-fetched explanation is that the source is the discussion in "An Overview of Contingent Claims Pricing", also shown below. The subtraction of little $q$ from $r$ is necessary to have the correct drift under the $\mathbb{Q}$ measure. This is even more far-fetched since Hull and White don't discuss risk neutral valuation in terms of the risk neutral $\mathbb{Q}$ measure.

[](https://i.stack.imgur.com/cFSlS.png)

[](https://i.stack.imgur.com/3Fry4.png)
null
CC BY-SA 4.0
null
2023-05-15T18:13:29.603
2023-05-16T20:42:12.297
2023-05-16T20:42:12.297
848
848
null
75549
1
null
null
1
55
I'm new to applying the Black-Litterman model. One point that is troubling me, from a multi-asset point of view, is the market portfolio as the starting point for the model.

Correct me if I'm wrong, but this is based on the entire estimated allocation of the market to a particular asset. So for a stock, this would be its market cap. A bond would be its entire issuance. However, when you apply it from a multi-asset perspective it becomes more complicated. For example, one approach I've seen applied is, if using an ETF tracking an equity index, the weight within the market portfolio would be based on the entire market cap of the equity index within the context of the entire investable universe of assets, not just the AUM of the ETF. Certainly more difficult to attain.

But what about alternatives? Take an allocation to a long/short equity fund that invests in the same region as the ETF. Surely you would be double counting here if you base the market portfolio's weight on the fund's AUM? And this problem goes for many different alternatives or other strategies. What if your allocation uses futures, for example?

So, can the market portfolio be replaced in some way? My initial thought is that using the AUM of the specific instrument, such as a fund or ETF, as the market portfolio's allocation is the correct approach. This is what the market has allocated to that specific instrument. But it can't be applied if you hold direct instruments as well. Perhaps there is a better way?
Black Litterman Model: can Market Weights be replaced?
CC BY-SA 4.0
null
2023-05-16T08:37:41.340
2023-05-16T09:39:59.450
2023-05-16T09:39:59.450
16148
62804
[ "black-litterman" ]
75550
1
null
null
2
45
Given a set of calibrated SABR parameters, what is the approach to get the implied vol for a given delta? Thanks.
Closed form solution to get implied vol from delta with SABR model
CC BY-SA 4.0
null
2023-05-16T13:26:51.950
2023-05-16T13:26:51.950
null
null
67442
[ "implied-volatility", "sabr" ]
75551
1
null
null
0
73
An XCS (cross currency swap) can be:

- Float vs float (#1)
- Fixed vs fixed (#2)
- Float vs fixed (#3)

> #2 can be constructed with 2 fixed vs float IRS and 1 xccy basis swap (#1)
> #3 can be constructed with 1 IRS and #1

An FX swap is equivalent to #2 risk-wise (I think?).

Is it true that #1 has FX risk, interest rate risk for each of the two currencies involved, and xccy basis risk? If so, does that mean that #2 and FX swaps don't have any interest rate risk, only xccy basis and FX risk?

Or, if that is not true, then perhaps #1 only has FX risk and xccy basis risk? And then it follows that #2 and FX swaps do have, on top of FX and xccy basis risk, interest rate risk for each currency?
XCS and FX swaps: market risks
CC BY-SA 4.0
null
2023-05-16T14:50:07.053
2023-05-16T20:27:59.063
null
null
57427
[ "cross-currency-basis" ]
75552
1
null
null
0
50
In the Longstaff & Schwartz article they condition on using In-The-Money (ITM) paths only for the regression. The reason for this is to obtain more accurate results and also to reduce the computational burden. However, if I blindly implement this approach and then consider an American put option that is far Out-of-The-Money (OTM), it may very well happen that all of the paths are OTM and thus not part of the regression, as they fail to satisfy the ITM condition.

What are some good methods to handle (far) OTM options using LSMC? I could check whether a certain percentage or number of paths are ITM before applying the condition, but perhaps someone here can share some insight as to how to handle these details in practice.
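For concreteness, here is a minimal sketch (my own, not from the paper) of one backward-induction step with the ITM filter, showing where the regression degenerates when few or no paths are in the money; the path generator and the minimum-path threshold are arbitrary assumptions:

```
import numpy as np

rng = np.random.default_rng(1)
K, r, dt = 9.0, 0.03, 1 / 50

# hypothetical simulated stock values at one exercise date and the
# discounted cashflows obtained from continuing (from later steps)
S_t = rng.lognormal(mean=np.log(12.0), sigma=0.1, size=10_000)  # put is far OTM
future_cf = np.maximum(K - S_t * rng.lognormal(0.0, 0.05, S_t.size), 0.0)

itm = (K - S_t) > 0                    # Longstaff-Schwartz ITM filter
print("ITM paths:", itm.sum())         # can be zero for a far OTM option

if itm.sum() >= 10:                    # arbitrary minimum, an assumption
    X = np.vander(S_t[itm], 3)         # quadratic polynomial basis, ITM paths only
    coef, *_ = np.linalg.lstsq(X, np.exp(-r * dt) * future_cf[itm], rcond=None)
    continuation = np.vander(S_t, 3) @ coef
else:
    continuation = np.full(S_t.size, np.inf)   # never exercise at this date

exercise = np.maximum(K - S_t, 0.0) > continuation
```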
LSMC for Out of The Money paths
CC BY-SA 4.0
null
2023-05-16T17:00:03.960
2023-05-16T17:00:03.960
null
null
56812
[ "option-pricing", "monte-carlo", "regression", "american-options", "longstaff-schwartz" ]
75554
2
null
75551
0
null
Broadly you are on the right track. I have used Python's `rateslib` to make these calculations. Setup the curves and the risk engine ``` from rateslib import * usdusd = Curve({dt(2023, 1, 1): 1.0, dt(2024, 1, 1): 0.95}, id="usd") eureur = Curve({dt(2023, 1, 1): 1.0, dt(2024, 1, 1): 0.975}, id="eur") eurusd = Curve({dt(2023, 1, 1): 1.0, dt(2024, 1, 1): 0.975}, id="eurusd") instruments = [ IRS(dt(2023, 1, 1), "1Y", "A", currency="usd", curves="usd"), IRS(dt(2023, 1, 1), "1Y", "A", currency="eur", curves="eur"), XCS(dt(2023, 1, 1), "1Y", "Q", currency="eur", leg2_currency="usd", curves=["eur", "eurusd", "usd", "usd"]) ] fxr = FXRates({"eurusd": 1.10}, settlement=dt(2023, 1, 3)) fxf = FXForwards( fx_rates=fxr, fx_curves={ "usdusd": usdusd, "eureur": eureur, "eurusd": eurusd, } ) solver = Solver( curves=[usdusd, eureur, eurusd], instruments=instruments, s = [5.0, 2.5, -10], instrument_labels=["USD 1Y", "EUR 1Y", "XCS 1Y"], fx=fxf, id="XCCY" ) ``` Now I build each of your instruments and query the delta. First a mark-to-market cross currency swap: ``` xcs = XCS( dt(2023, 1, 1), "1Y", "Q", notional=100e6, currency="eur", leg2_currency="usd", fx_fixings=1.10, curves=["eur", "eurusd", "usd", "usd"] ) xcs.delta(solver=solver, base="eur").style.format("{:.0f}") ``` [](https://i.stack.imgur.com/6mofU.png) Second a fixed-fixed cross currency swap. ``` ffxcs = FixedFixedXCS( dt(2023, 1, 1), "1Y", "A", notional=100e6, currency="eur", fixed_rate=2.41, leg2_currency="usd", leg2_fixed_rate=5.01, fx_fixings=1.10, curves=["eur", "eurusd", "usd", "usd"] ) ffxcs.delta(solver=solver, base="eur").style.format("{:.0f}") ``` [](https://i.stack.imgur.com/PvUMY.png) And finally an FXswap (I have had to iterate the fixed rate input to get a mid-market FXSwap) ``` fxs = FXSwap( dt(2023, 1, 1), "1y", "Z", notional=100e6, currency="eur", leg2_currency="usd", leg2_fixed_rate=2.54, fx_fixing=1.10, curves=["eur", "eurusd", "usd", "usd"], ) fxs.delta(solver=solver, base="eur").style.format("{:.0f}") ``` [](https://i.stack.imgur.com/r8pAC.png) The FXSwap has 216 EUR per pip risk to the EURUSD FX rate because each leg of the FXswap has an NPV in each local currency when executed at mid market.
null
CC BY-SA 4.0
null
2023-05-16T20:27:59.063
2023-05-16T20:27:59.063
null
null
29443
null
75555
2
null
15976
0
null
Many good answers already. It is also worth mentioning that you can simply reverse-engineer it from historical VIX, VVIX and rates data, and plug this into a pricing model.
null
CC BY-SA 4.0
null
2023-05-16T23:23:49.733
2023-05-16T23:23:49.733
null
null
47192
null
75556
1
null
null
0
31
I'm building an API that integrates with Backtrader and QuantConnect to run some backtests. I'm passing in machine-generated data and I just want to make sure I'm not sending them too much. So, what data do Backtrader and QuantConnect collect from users?
Does Backtrader and QuantConnect collect any data?
CC BY-SA 4.0
null
2023-05-17T00:26:06.100
2023-05-17T00:26:06.100
null
null
67036
[ "backtesting", "simulations", "general" ]
75557
1
null
null
0
54
Say we're selling a TRS on a local MSCI index denominated in USD and delta hedge it by buying equity futures on that local index denominated in the local currency (non-USD). Let's also ignore repo/div/IR risk and only look at FX and funding.

One option would be to sell an FX spot (sell USD, buy local currency) replicating the proper fixing (WMCO for MSCI indices) on trade date to hedge the FX risk, and then enter an FX swap for the maturity of the swap, where we borrow USD and lend the local currency in order to flatten our cash balance during the swap's life. At the swap expiry we would do an FX spot trade opposite to what we did on trade date.

The question is: why could we not just sell an FX forward instead of an FX spot (on trade date and on expiry date) coupled with an FX swap? If we could indeed do the hedge with just an FX forward, what maturity should we choose for the FX forward?
Hedging of FX combo trade
CC BY-SA 4.0
null
2023-05-17T01:51:58.227
2023-05-17T08:59:14.647
2023-05-17T08:59:14.647
26556
57427
[ "fx" ]
75558
1
null
null
0
36
As stated in the title, who first derived the formula $$ \text{Varswap strike} = \int_{\mathbb R} I^2(d_2) n(d_2) \, \mathrm d d_2 $$ where $d_2$ is the Black-Scholes quantity $$ d_2 = \frac{ \log S_t/K}{I\sqrt{T-t}} - \frac{I\sqrt{T-t}}{2} $$ and $n(d_2)$ is the standard normal density function? The earliest reference I could find is to a GS paper by Morokoff et al., Risk management of volatility and variance swaps, Firmwide Risk Quantitative Modelling Notes, Goldman Sachs, 1999. Is this then the source, i.e. can the formula be attributed to Morokoff?
Origin of the formula Varswap strike $= \int_{\mathbb R} I^2(d_2) n(d_2)\, \mathrm d d_2$
CC BY-SA 4.0
null
2023-05-17T09:27:45.203
2023-05-17T09:36:49.977
2023-05-17T09:36:49.977
65759
65759
[ "variance-swap" ]
75559
1
75563
null
1
131
There are two stocks: $S_t$ and $P_t$ $$dS_t = S_t(\mu dt + \sigma dB_t)$$ $$dP_t = P_t((\mu + \varepsilon) dt + \sigma dB_t)$$ Is there any risk-neutral measure? My thoughts are pretty simple: $\mu$ is for the physical measure, so there's no risk-neutral measure. Please shed light on this question.
Is there a risk-neutral measure if there are two stocks with different drift terms?
CC BY-SA 4.0
null
2023-05-17T11:39:30.017
2023-05-18T09:03:20.737
2023-05-18T09:03:20.737
848
67212
[ "quant-trading-strategies", "risk-neutral-measure" ]