Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
75702 | 1 | 75752 | null | 4 | 94 | Consider the Gamma-Ornstein-Uhlenbeck process defined in the way Barndorff-Nielsen does, but with a different long-run mean $b$, which may be greater than zero:
$$dX(t) = \eta(b - X(t))dt + dZ(t)$$
Where
$$Z(t) = \sum_{n=1}^{N(t)}J_n$$
With $N(t)$ being Poisson($\lambda t$) and $J_n$ iid Exponential($k$)
This has solution
$$X(t) = b + \mathrm{e}^{-\eta (t-t_0)}\left[X(t_0) - b\right] + \sum_{n=N(t_0)}^{N(t)}\mathrm{e}^{-\eta (t-\tau_n)}J_n$$
Where $\tau_n$ are the jump times of $N(t)$
What is the characteristic function of the stochastic part (the compound Poisson part)? Let's call it $\bar{Z}$.
Additionally, there's a different process defined as
$$dY(t) = \eta(b - Y(t))dt + dZ(\eta t)$$
Which is said to have always the same distribution for every time point (as seen in Schoutens 2003). What is the characteristic function in this case? I assume it's supposed to be a gamma characteristic function, but I don't see what parameters it must have; is it really independent of $t_0$ and $t$?
I have been reading a lot of related papers on the subject, but they only work with the Laplace transform. I have calculated a version of the characteristic function of $\bar{Z}(t)|\bar{Z}(t_0)$ using the theorem
$$\Phi_{\bar{Z}(t)|\bar{Z}(t_0)}(u) = \Phi_{\int_{t_0}^t f(s) dZ(s)}(u) = \exp\left[ \int_{t_0}^t \Psi_{Z(s)}(uf(s))ds\right]$$
Where we have the characteristic exponent of the compound Poisson process given by
$$\Psi_{Z(t)} = -\lambda \left( 1 - \frac{k}{k-iu} \right)$$
And the function $f(s)$ in our case is $\mathrm{e}^{-\eta(t-s)}$.
I obtained a huge formula, so I am wary of my results. Here is the result I obtained:
$$\Phi_{\bar{Z}(t)|\bar{Z}(t_0)}(u) = \exp\left\{-\lambda\int_{t_0}^t \left(1 - \frac{k}{k - iu\mathrm{e}^{-\eta(t-s)}} \right)ds\right\}$$
$$\Phi_{\bar{Z}(t)|\bar{Z}(t_0)}(u) = \exp \left\{\frac{\lambda\log\left[\mathrm{e}^{2 \eta t_0}u^2+k^2\mathrm{e}^{2 \eta t}\right]}{2 \eta} - \frac{\lambda\log\left[\mathrm{e}^{2 \eta t}u^2+k^2\mathrm{e}^{2 \eta t}\right]}{2 \eta}- \frac{i \lambda}{\eta}\arctan\left(\frac{\mathrm{e}^{-\eta (t-t_0)} u}{k}\right) + \frac{i \lambda}{\eta} \arctan\left(\frac{u}{k}\right) \right\}$$
With some cleaning up,
$$\Phi_{\bar{Z}(t)|\bar{Z}(t_0)}(u) = \left( \frac{k^2 + u^2 \mathrm{e}^{-2\eta(t-t_0)}}{k^2 + u^2}\right)^{\frac{\lambda}{2\eta}}\exp\left\{ i \frac{\lambda}{\eta} \arctan\left( \frac{ku\left(1 - \mathrm{e}^{-\eta(t-t_0)} \right)}{k^2 + u^2 \mathrm{e}^{-\eta(t-t_0)}} \right) \right\}$$
And the limit when $t \rightarrow +\infty $
$$\Phi_{\bar{Z}(+\infty)|\bar{Z}(t_0)}(u) = \left( \frac{k^2}{k^2 + u^2}\right)^{\frac{\lambda}{2\eta}}\exp\left\{ i \frac{\lambda}{\eta} \arctan\left( \frac{u}{k}\right) \right\}$$
And from here I do not know how to manipulate it so that the stationary limit is the gamma characteristic function (assuming it is).
Furthermore, it contradicts a result I found in another paper([https://arxiv.org/pdf/2003.08810v1.pdf](https://arxiv.org/pdf/2003.08810v1.pdf) Eq.5), which omits the calculations and refers to another source that only has the Laplace transform ([https://core.ac.uk/download/pdf/96685.pdf](https://core.ac.uk/download/pdf/96685.pdf) Example 3.4.3). That paper says the characteristic function is:
$$\Phi_{\int_{t_0}^t f(s) dZ(s)}(u) = \left( \frac{k - iu\mathrm{e}^{-\eta(t-t_0)}}{k - iu}\right)^{\lambda / \eta}$$
However, some numerical tests I have done seem to indicate this might be wrong.
EDIT: I'm starting to suspect that the polar form I obtained can be shown to agree with the result in the paper, or, the other way around, that the paper's expression can be rewritten in this polar form. I think my numerical errors come from the pole at $\bar{Z}(t)=0$: when numerically inverting the characteristic function to obtain the pdf, I can't seem to separate the "non-singular" part of the characteristic function from the full one and add the pole contribution back at the end.
| Characteristic function of Gamma-OU process | CC BY-SA 4.0 | null | 2023-05-29T17:19:48.320 | 2023-06-02T18:46:13.570 | 2023-05-30T16:48:02.620 | 66523 | 66523 | [
"volatility",
"stochastic-processes",
"stochastic-calculus",
"stochastic-volatility",
"ornstein-uhlenbeck"
]
|
75703 | 1 | 75720 | null | 2 | 54 | I've been using QuantLib for constructing a yield curve and pricing a bond. I am wondering if I'm using the correct method to create my yield term structure (`yts`) for the pricing process.
Here is a reproducible example:
```
import QuantLib as ql
import math
calculation_date = ql.Date().todaysDate()
ql.Settings.instance().evaluationDate = calculation_date
yts = ql.RelinkableYieldTermStructureHandle()
index = ql.OvernightIndex("USD Overnight Index", 0, ql.USDCurrency(), ql.UnitedStates(ql.UnitedStates.Settlement), ql.Actual360(),yts)
swaps = {
ql.Period("1W"): 0.05064,
ql.Period("2W"): 0.05067,
ql.Period("3W"): 0.05072,
ql.Period("1M"): 0.051021000000000004,
ql.Period("2M"): 0.051391,
ql.Period("3M"): 0.051745,
ql.Period("4M"): 0.05194,
ql.Period("5M"): 0.051980000000000005,
ql.Period("6M"): 0.051820000000000005,
ql.Period("7M"): 0.051584000000000005,
ql.Period("8M"): 0.05131,
ql.Period("9M"): 0.050924,
ql.Period("10M"): 0.050603999999999996,
ql.Period("11M"): 0.050121,
ql.Period("12M"): 0.049550000000000004,
ql.Period("18M"): 0.04558500000000001,
ql.Period("2Y"): 0.042630999999999995,
ql.Period("3Y"): 0.038952,
ql.Period("4Y"): 0.036976,
ql.Period("5Y"): 0.035919,
ql.Period("6Y"): 0.03535,
ql.Period("7Y"): 0.034998,
ql.Period("8Y"): 0.034808,
ql.Period("9Y"): 0.034738000000000005,
ql.Period("10Y"): 0.034712,
ql.Period("12Y"): 0.034801,
ql.Period("15Y"): 0.034923,
ql.Period("20Y"): 0.034662,
ql.Period("25Y"): 0.03375,
ql.Period("30Y"): 0.032826,
ql.Period("40Y"): 0.030834999999999998,
ql.Period("50Y"): 0.02896
}
rate_helpers = []
for tenor, rate in swaps.items():
helper = ql.OISRateHelper(2, tenor, ql.QuoteHandle(ql.SimpleQuote(rate)), index)
rate_helpers.append(helper)
curve = ql.PiecewiseFlatForward(calculation_date, rate_helpers, ql.Actual360())
yts.linkTo(curve)
index = index.clone(yts)
engine = ql.DiscountingSwapEngine(yts)
print("maturity | market | model | zero rate | discount factor | present value")
for tenor, rate in swaps.items():
ois_swap = ql.MakeOIS(tenor, index, rate)
pv = ois_swap.NPV()
fair_rate = ois_swap.fairRate()
maturity_date = ois_swap.maturityDate()
discount_factor = curve.discount(maturity_date)
zero_rate = curve.zeroRate(maturity_date, ql.Actual365Fixed() , ql.Continuous).rate()
print(f" {tenor} | {rate*100:.6f} | {fair_rate*100:.6f} | {zero_rate*100:.6f} | {discount_factor:.6f} | {pv:.6f}")
issue_date = ql.Date(12,1,2022)
maturity_date = ql.Date(12,1,2027)
coupon_frequency = ql.Period(ql.Semiannual)
calendar = ql.UnitedStates(ql.UnitedStates.GovernmentBond)
date_generation = ql.DateGeneration.Backward
coupon_rate = 4.550000/100
day_count = ql.Thirty360(ql.Thirty360.USA)
spread = ql.SimpleQuote(89.965 / 10000.0)
schedule = ql.Schedule( issue_date,
maturity_date,
coupon_frequency,
calendar,
ql.Unadjusted,
ql.Unadjusted,
date_generation,
False)
bond = ql.FixedRateBond(2, 100, schedule, [coupon_rate], day_count)
spread_handle = ql.QuoteHandle(spread)
spreaded_curve = ql.ZeroSpreadedTermStructure(yts, spread_handle)
spreaded_curve_handle = ql.YieldTermStructureHandle(spreaded_curve)
bond.setPricingEngine(ql.DiscountingBondEngine(spreaded_curve_handle))
print(f"NPV {bond.NPV()} vs dirty price {bond.dirtyPrice()} - clean price {bond.cleanPrice()}")
```
I'm using the yield term structure (`yts`) linked to the curve (`curve`) constructed using `ql.PiecewiseFlatForward`.
I'm wondering if it is correct to use the yts which links to the forward curve to price the bond.
Or, do I need to build a zero-coupon curve for pricing? If so, how would I build and use this zero-coupon curve?
I've noticed that QuantLib allows the conversion from forward rates to zero rates using the `zeroRate()` function. Is this function enough to derive the correct zero-coupon rates from the forward rates for bond pricing, or is a more explicit construction of a zero-coupon curve necessary?
Any guidance or examples would be greatly appreciated. Thanks!
| Using Yield Term Structure for Bond Pricing in QuantLib: Is a Zero-Coupon Curve Necessary? | CC BY-SA 4.0 | null | 2023-05-30T10:12:51.713 | 2023-05-31T11:16:28.840 | 2023-05-30T11:56:58.410 | 42110 | 42110 | [
"programming",
"fixed-income",
"quantlib",
"bootstrap"
]
|
75704 | 2 | null | 75047 | 0 | null | I think I figured it out. If:
- $\Sigma$ is the factor VCV matrix (n factors by n factors)
- $f$ is the matrix of factor exposures (m assets by n factors)
Then you can re-create the security level VCV matrix with just:
$$ f\Sigma f^T $$
Similarly if:
- $\mu$ is the row vector of factor returns (1 by n factors)
You can recreate the security level returns by just:
$$ \mu f^T $$
Now that you have security level risk and return parameters you can apply the technique mentioned above in the linked paper.
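To make the shapes concrete, here is a minimal numpy sketch (the dimensions and numbers are made up for illustration; a full risk model would also add an idiosyncratic/specific-risk term to the security-level covariance):
```
import numpy as np

# Hypothetical dimensions: m = 4 assets, n = 2 factors.
m, n = 4, 2
rng = np.random.default_rng(0)

Sigma_f = np.array([[0.04, 0.01],        # factor covariance matrix, n x n
                    [0.01, 0.09]])
f = rng.normal(size=(m, n))              # factor exposures, m assets x n factors
mu_f = np.array([0.05, 0.02])            # factor returns, 1 x n

Sigma_assets = f @ Sigma_f @ f.T         # security-level covariance, m x m
mu_assets = mu_f @ f.T                   # security-level returns, length m

print(Sigma_assets.shape, mu_assets.shape)   # (4, 4) (4,)
```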
| null | CC BY-SA 4.0 | null | 2023-05-30T12:42:37.437 | 2023-05-30T12:58:46.570 | 2023-05-30T12:58:46.570 | 55044 | 55044 | null |
75705 | 1 | null | null | 2 | 33 | I'm currently reading the paper "[Explicit SABR Calibration through Simple Expansions](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2467231)" by Fabien Le Floc'h and Gary Kennedy, and in Section 3, where they introduce the Andersen & Brotherton-Ratcliffe expansions, I can't manage to reproduce the results they get with the parameters they use when the strike is different from the forward value. It seems like I have an issue with the value of my Γ(K) and/or the D(f). Can someone show the complete derivation with the numerical values for these particular cases?
| How to calculate D(f) in the new lognormal and normal formula in this document : "Explicit SABR Calibration through Simple Expansions"? | CC BY-SA 4.0 | null | 2023-05-30T13:31:41.380 | 2023-05-31T18:14:30.963 | 2023-05-31T18:14:30.963 | 848 | 67616 | [
"sabr",
"calculation"
]
|
75706 | 1 | null | null | 0 | 25 | I have a dataset that has one independent variable and 12 dependent variables that I'm trying to model in a basic spreadsheet program. The dataset is not big, fewer than about 500 samples, so I think a basic spreadsheet program is sufficient. What I'm trying to do is to model how a move in an index relates to a group of related stocks. So, for example, if there is a large move in the S&P 500 in 1 minute, I'd like to see how the prices of, say, 12 individual stocks that I selected changed in the same 1 minute.
I know a basic spreadsheet program can do multivariate regression for multiple independent variables but only for one dependent (outcome) variable. However, what I am trying to do is the reverse, where I have one independent variable and multiple dependent variables.
| Excel or excel addin for Multivariate Multiple Regression | CC BY-SA 4.0 | null | 2023-05-30T14:36:54.270 | 2023-05-30T16:16:08.470 | 2023-05-30T16:16:08.470 | 67617 | 67617 | [
"regression",
"models",
"excel",
"multivariate"
]
|
75707 | 2 | null | 75683 | 2 | null | There is no doubt that the Black & Scholes formula for the European call
$$\tag{1}
C(S_t,t)=S_t\Phi(d_1)-e^{-r(T-t)}K\Phi(d_2)
$$
where
$$\tag{2}
d_{1,2}=\frac{\log(S_t/K)+r(T-t)\pm\sigma^2(T-t)/2}{\sigma\sqrt{T-t}}
$$
satisfies the Black & Scholes PDE
$$\tag{3}
\partial_tC+\frac{1}{2}\sigma^2S^2\partial_{SS}C+rS\partial_SC-rC=0\,.
$$
Proof. Take (1) and differentiate. Use
$$\tag{4}
S_t\Phi'(d_1)-e^{-r(T-t)}K\Phi'(d_2)=0\,.
$$
$$\tag*{$\Box$}
\quad
$$
This proof also shows
$$\tag{5}
\partial_SC(S_t,t)=\Phi(d_1)\,.
$$
The trading strategy to hold
$$\tag{6}
\alpha_t:=\Phi(d_1)=\partial_S C(S_t,t)
$$
units of the stock $S_t$ and
$$\tag{7}
\beta_t:=\frac{C(S_t,t)-\alpha _tS_t}{e^{rt}}=-e^{-rT}K\Phi(d_2)
$$
units of the money market account $e^{rt}$ is self-financing.
Proof. By (1) the portfolio value $\alpha_t S_t+\beta_t e^{rt}$ is equal to the call price. By the widely accepted definition of the trading strategy to be self-financing we therefore have to verify that the call price satisfies
$$\tag{8}
dC=\alpha_t\,dS_t+\beta_t \,d(e^{rt})=\alpha_t\,dS_t+r\beta_t\,e^{rt}\,dt\,.
$$
By Ito's formula and the Black-Scholes PDE (3) and using
$$\tag{9}
dS_t=\sigma S_t\,dW_t+rS_t\,dt
$$ we have
\begin{align}
dC&\stackrel{\text{Ito}}{=}\partial_t C\,dt+\underbrace{\partial_SC}_{\alpha_t}\,dS_t+\frac{1}{2}\partial_{SS}C\,d\langle
S\rangle_t\\
&\stackrel{(9)}{=}\partial_t C\,dt+\alpha_t\,dS_t+\frac{1}{2}\sigma^2S_t^2\partial_{SS}C\,dt\\
&\stackrel{(3)}{=}rC\,dt-r\alpha_tS_t\,dt+\alpha_t\,dS_t\\
&\stackrel{(7)}=r\beta_t\,e^{rt}\,dt+\alpha_t\,dS_t\,.
\end{align}
$$\tag*{$\Box$}
\quad
$$
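As a quick numerical sanity check of (3) and (5), one can verify both with finite differences; the following is only an illustrative sketch with arbitrary parameters:
```
import numpy as np
from scipy.stats import norm

# Arbitrary illustrative parameters.
K, T, r, sigma = 100.0, 1.0, 0.03, 0.2

def bs_call(S, t):
    # Black-Scholes call price, equations (1)-(2)
    tau = T - t
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - np.exp(-r * tau) * K * norm.cdf(d2)

S, t, h = 105.0, 0.4, 1e-3
C_t  = (bs_call(S, t + h) - bs_call(S, t - h)) / (2 * h)                    # dC/dt
C_S  = (bs_call(S + h, t) - bs_call(S - h, t)) / (2 * h)                    # dC/dS
C_SS = (bs_call(S + h, t) - 2 * bs_call(S, t) + bs_call(S - h, t)) / h**2   # d2C/dS2

# Equation (3): the PDE residual should be numerically close to zero.
print(C_t + 0.5 * sigma**2 * S**2 * C_SS + r * S * C_S - r * bs_call(S, t))

# Equation (5): the delta should equal Phi(d1).
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * (T - t)) / (sigma * np.sqrt(T - t))
print(C_S - norm.cdf(d1))
```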
| null | CC BY-SA 4.0 | null | 2023-05-30T14:51:58.037 | 2023-05-30T14:51:58.037 | null | null | 58786 | null |
75708 | 1 | null | null | 2 | 80 | I was wondering if there exist options or other derivatives that do not have a known closed-form analytic expression (i.e., no closed-form solution of some Black-Scholes-type PDE) and are usually priced using Monte Carlo methods, but that could have such an expression? Specifically, I am wondering if it is possible to analyze the price data of some derivative as a function of time and underlying price to discover a PDE using something like symbolic regression or other ML-based techniques?
| Derivatives without analytic expressions? | CC BY-SA 4.0 | null | 2023-05-30T15:55:10.530 | 2023-05-30T16:44:16.133 | null | null | 60241 | [
"options",
"black-scholes",
"monte-carlo",
"derivatives",
"machine-learning"
]
|
75709 | 1 | null | null | 0 | 25 | I have a very broad question. I would like to ask why financial markets thrive or crash. After all, aren't the prices of shares just made up on the stock market, independently of the fundamental values of fundamental analysis associated with the companies that have been placed on the stock market through initial public offerings (IPO)s?
Thanks.
| Why do financial markets thrive or crash | CC BY-SA 4.0 | null | 2023-05-30T16:14:00.043 | 2023-05-30T16:14:00.043 | null | null | 37989 | [
"market-data",
"market-making",
"market",
"market-model"
]
|
75710 | 1 | null | null | 3 | 81 | I have a silly question regarding complex calculus, in which I'm a bit rusty at the moment. In F. Rouah's book The Heston Model and Its Extensions in Matlab and C# the following appears:
>
Now evaluate the inner integral in Equation (3.32), as was done in (3.14). This produces
$$\begin{align}
\Pi_1 & = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} \varphi_2(u) \dfrac{e^{-i(u+i)l}}{i(u + i)}\, du - \dfrac{1}{2\pi} \lim_{R\to\infty}\int_{-\infty}^{\infty} \varphi_2(u) \dfrac{e^{-i(u+i)R}}{i(u + i)}\, du\\
& = I_1 - I_2. \end{align}$$
The second integral is a complex integral with a pole at $u = -i$. The residue there is, therefore, $\varphi_2(-i)/i$. Applying the Residue Theorem, we obtain
$$ I_2 = \lim_{R\to\infty} \dfrac{1}{2\pi} \Bigg[ -2 \pi i \times \dfrac{ \varphi_2 (-i)}{i} \Bigg] = -\varphi_2(-i).$$
I think I see the point on solving the integral using the residue theorem and how it works. However, the question that rises is: Why can't I just do the same for the first integral?
Thanks
---
Edit: a screenshot of that page, for completion
[](https://i.stack.imgur.com/25iI9.png)
| Complex Integral in Rouah's Heston book | CC BY-SA 4.0 | null | 2023-05-30T16:24:07.787 | 2023-05-31T09:13:03.377 | 2023-05-31T09:13:03.377 | 59915 | 59915 | [
"heston",
"rouah",
"complex"
]
|
75711 | 2 | null | 75708 | 1 | null | A unilateral extinguisher is a simple example, as it is easy to describe.
Counterparties A and B have a portfolio of swaps between them, e.g. interest rate swaps or cross-currency swaps. For simplicity, you can even consider just one swap, and assume no margin, collateral, or netting agreements.
The portfolio has some mark to market value V.
If a CDS-like credit event happens to credit C, then: if V > v for some contractually specified strike v, the portfolio "extinguishes", i.e. all the swaps in the portfolio are canceled, but some recovery may be paid out. Otherwise the portfolio lives on.
In practice, the recovery could depend not only on V, but on the time and other non-trivial parameters.
It is easy to write a closed-form pricer for a zero-recovery bilateral extinguisher, which just extinguishes with no recovery irrespective of the value of V. But I am not aware of a methodology other than MC to price a unilateral one.
| null | CC BY-SA 4.0 | null | 2023-05-30T16:44:16.133 | 2023-05-30T16:44:16.133 | null | null | 36636 | null |
75712 | 1 | null | null | 3 | 42 | I was wondering if the representation by Duffie, Pan, and Singleton (2000) is already accounting for the little Heston trap. DPS represent their 'general' discounted characteristic function as:
$$
\begin{align}
\psi(u,(y,v),t,T) = exp(\alpha(u,T-t) + yu + \beta(u,T-t)v),
\end{align}
$$
where
$$
\begin{align}
\beta(\tau,u) &= -a\frac{1-\exp{(-\gamma\tau)}}{2\gamma-(b+\gamma)(1-\exp{(-\gamma\tau)})}\\
\alpha_{0}(\tau,u) &= -r\tau +(r-\xi)\tau u - \kappa\sigma\left(\frac{b+\gamma}{\sigma^{2}}\tau + \frac{2}{\sigma^{2}}\log\left(1 - \frac{b+\gamma}{2\gamma}\left(1-\exp{(-\gamma\tau)}\right)\right)\right)\\
\alpha(\tau,u) &= \alpha_{0}(\tau,u) - \bar{\lambda}\tau(1+\bar{\mu}u) + \bar{\lambda}\int^{\tau}_{0} \theta(u,\beta(s,u))ds,\\
a &= u(1-u),\\
b &= \sigma\rho u - \kappa,\\
\gamma &= \sqrt{b^{2} + a\sigma^{2}},\\
\bar{\lambda} &= \lambda_{y} + \lambda_{v} + \lambda_{c}.
\end{align}
$$
The transform $\theta(c_{1},c_{2})$ can be found in their paper. When comparing this discounted characteristic function to other variations of the original Heston characteristic function, they look quite different from each other, which makes it hard for me to figure out whether this representation already takes care of 'the little Heston trap'. If it does not, what would I need to change here to handle the little Heston trap? Do I also need to change something in the integral?
| The little Heston Trap in DPS representation | CC BY-SA 4.0 | null | 2023-05-30T16:55:43.810 | 2023-05-30T16:55:43.810 | null | null | 65067 | [
"option-pricing",
"implied-volatility",
"heston",
"characteristic-function"
]
|
75713 | 1 | null | null | 0 | 59 | I would like to know what some sources of enthusiasm are for people working in finance.
Thanks.
| Working in finance and enthusiasm | CC BY-SA 4.0 | null | 2023-05-30T17:48:01.670 | 2023-05-30T17:48:01.670 | null | null | 37989 | [
"behavioral-finance"
]
|
75715 | 1 | null | null | 0 | 30 | What happens if all the shareholders of a company with N shares want to sell their shares at the same time, but nobody is available to buy them?
This is a hypothetical scenario (after all, if I understand correctly, there are always sellers and buyers available somewhere who are able to buy and sell simultaneously. A more likely hypothetical scenario, I think, is one where there are sellers and buyers, but many more sellers than buyers. How does a financial intermediary tackle that situation?).
So, how could the scenario I describe happen, and, what would happen, market wise?
Can you describe, in steps, and/or in detail, even with numbers or an example, what would happen in this case?
My understanding of finance is limited.
Thanks.
| What happens if all shareholders want to sell simultaneously, but there is nobody to buy | CC BY-SA 4.0 | null | 2023-05-31T02:06:38.630 | 2023-05-31T02:06:38.630 | null | null | 37989 | [
"finance",
"finance-mathematics",
"behavioral-finance"
]
|
75716 | 1 | null | null | 0 | 44 | Suppose I have a factor model that takes in unemployment and GDP as $X_1, X_2$ respectively in estimating the fair price of asset $Y$. Say I observe that the market price of $Y$ has deviated significantly from the estimate from my model.
My intuition says this signal cannot be used as a trading signal because, while the price of $Y$ is dislocated, two of the factors are non-tradable. This means that the market price of $Y$ could converge to the fair price by way of movement in $X_1, X_2$, and it doesn't carry definitive information on the price direction of $Y$.
Is my interpretation correct or am I missing something? How are factor models actually used in practice?
Update: GDP and unemployment may have been a bad example, as they change infrequently. For the sake of argument, assume GDP and unemployment updates on second-to-second basis.
| Unhedged factor models in trading | CC BY-SA 4.0 | null | 2023-05-31T02:10:13.983 | 2023-05-31T02:35:09.427 | 2023-05-31T02:35:09.427 | 65980 | 65980 | [
"hedging",
"factor-models",
"factor-investing"
]
|
75717 | 1 | null | null | -2 | 26 | For a given stock, how do I know how many people hold the stock and are willing to sell it, how many buyers do not own the stock but are willing to purchase it, and how many happy owners hold the stock but are not willing to sell it, and what the percentages of these groups are for any given share?
Does only a financial intermediary (for instance, one working at a stock exchange, or perhaps also others) have access to this information?
If so, what could his or her computer screen look like (in displaying such information)?
Can someone please post some information?
This could be useful for gaining insight into this issue and figuring out how to track this data, for reasoning, building models, and curiosity.
Thanks.
| Buyers, sellers, and happy owners | CC BY-SA 4.0 | null | 2023-05-31T02:59:45.687 | 2023-05-31T02:59:45.687 | null | null | 37989 | [
"equities",
"price",
"exchange",
"interactive-brokers",
"broker"
]
|
75718 | 2 | null | 75667 | 1 | null | A market is said to be complete if every derivative can be hedged. You can find a discussion of this, e.g. in Shreve's Stochastic Calculus for finance II (chapter 5). The idea is that then you can remove any source of uncertainty/risk (i.e. hedge) from the derivative in that market by taking some positions in the securities that belong to that market.
In that book he also gives an algebraic definition and discusses why completeness corresponds one-to-one to the uniqueness of the risk-neutral measure.
| null | CC BY-SA 4.0 | null | 2023-05-31T07:47:01.027 | 2023-05-31T07:47:01.027 | null | null | 59915 | null |
75720 | 2 | null | 75703 | 2 | null | What curve to use has little to do with QuantLib itself and more to do with how you're modelling credit risk for your bond.
Bootstrapping over OIS rates, whether using QuantLib or not, gives you a risk-free rate, which can in fact also give you zero rates (by integrating the forwards) and, when using QuantLib, can in fact be passed to `DiscountingBondEngine`. But it's probably the wrong curve to use for discounting because it's risk-free.
What risky curve to use depends on the data you have available. You can fit one over quoted bond prices (see for example QuantLib's `FittedBondDiscountCurve`), or you can add a z-spread over the risk-free curve to add credit risk (with `ZeroSpreadedTermStructure` in QuantLib), or you can interpolate zero-rates or discount factors coming from some other desk (`ZeroCurve` or `DiscountCurve`, respectively).
It's a modelling choice, though, and depends on your context. Looking up how to use the corresponding class is probably the easy part.
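For the last option, a minimal sketch of what interpolating externally supplied zero rates could look like (the dates, rates and day count below are made-up placeholders, not part of the original answer):
```
import QuantLib as ql

today = ql.Date(30, ql.May, 2023)
ql.Settings.instance().evaluationDate = today

# Hypothetical zero rates obtained from another desk or data source.
dates = [today, today + ql.Period("1Y"), today + ql.Period("5Y"), today + ql.Period("10Y")]
zero_rates = [0.052, 0.050, 0.045, 0.042]

risky_curve = ql.ZeroCurve(dates, zero_rates, ql.Actual365Fixed())
risky_handle = ql.YieldTermStructureHandle(risky_curve)
# bond.setPricingEngine(ql.DiscountingBondEngine(risky_handle))  # as in the question's code
```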
| null | CC BY-SA 4.0 | null | 2023-05-31T11:16:28.840 | 2023-05-31T11:16:28.840 | null | null | 308 | null |
75722 | 1 | null | null | 0 | 65 | Let $\left(W_t\right)_{t\geq 0}$ be a Brownian motion with respect to filtration $\mathbb{F}=\left(\mathcal{F}_t\right)_{t\geq 0}$. Let $\left(\alpha_t\right)_{t\geq 0}$ be an $\mathbb{F}$-adapted stochastic process. What are sufficient conditions to ensure that the stochastic integral $\int_0^t\alpha_sdW_s$ is a normal random variable?
| Sufficient conditions to ensure that stochastic integral is a normal variable | CC BY-SA 4.0 | null | 2023-05-31T14:01:53.667 | 2023-05-31T23:50:36.750 | 2023-05-31T23:50:36.750 | 44897 | 44897 | [
"stochastic-processes",
"stochastic-calculus",
"normal-distribution",
"stochastic"
]
|
75723 | 2 | null | 72080 | 0 | null | The gamma of an option as it approaches the expiry date becomes ill defined at $S_T = K$. However, if you approximate your option sitting at $K$ as a set of options with strikes ranging from $K-\Delta_K$ to $K+\Delta_K$, what you're doing is limiting the spike sitting at $K$ and replacing that whole gamma for a wider one sitting along all those strikes.
Here's just a toy example on how that looks when you replace the gamma of an option with strike $K$ to ten option of strikes from 95% to 105% and 1/10 notional. You can just plug in the BS formulas (for price and gamma) and reproduce it in python easily.
[](https://i.stack.imgur.com/hvCot.png)
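A minimal Python sketch of that toy example (the parameters below are arbitrary; only the shape of the two gamma profiles matters):
```
import numpy as np
from scipy.stats import norm

def bs_gamma(S, K, T, r, sigma):
    # Black-Scholes gamma
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.pdf(d1) / (S * sigma * np.sqrt(T))

S = np.linspace(80, 120, 401)
K, T, r, sigma = 100.0, 0.01, 0.0, 0.2           # a few days to expiry

gamma_single = bs_gamma(S, K, T, r, sigma)        # one option struck at K
strikes = np.linspace(0.95 * K, 1.05 * K, 10)     # ten strikes, 1/10 notional each
gamma_strip = sum(bs_gamma(S, k, T, r, sigma) for k in strikes) / 10.0

# The strip's gamma is much flatter and wider than the single-strike gamma near S = K.
print(gamma_single.max(), gamma_strip.max())
```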
| null | CC BY-SA 4.0 | null | 2023-05-31T14:30:49.270 | 2023-05-31T14:30:49.270 | null | null | 59915 | null |
75724 | 2 | null | 75722 | -1 | null | I'm going to be following Shreve's Stochastic Calculus for Finance II.
First, Lévy's theorem states that if $M(t)$ is a continuous martingale relative to a filtration $\mathcal{F}(t)$, $M(0) = 0$, and $M$ accumulates quadratic variation as $[M, M](t)=t$ for all $t\geq 0$, then $M(t)$ is a Brownian motion, and we agree that the increments of a Brownian motion are distributed according to a normal random variable.
We then need to prove that (i) $M(t)$ is a martingale, (ii) $M(0) = 0$.
(i) this can be proved as
$$
\begin{align}
I(t) & = \mathbb{E} \Bigg[ \int_0^t \alpha(s) dW_s \Bigg \vert \mathcal{F}(z) \Bigg] = \mathbb{E} \Bigg[ \int_z^t \alpha(s) dW_s + \int_0^z \alpha(s) dW_s \Bigg \vert \mathcal{F}(z)\Bigg] = \\
& = \mathbb{E} \Bigg[ \int_z^t \alpha(s) dW_s \Bigg \vert \mathcal{F}(z)\Bigg] + I(z),
\end{align}
$$
where you can see that the first conditional expectation vanishes: discretizing the integral and taking the expectation conditional on the filtration $\mathcal{F}(z)$ gives 0.
(ii) this follows straightforwardly from the definition of the integral.
Finally, the integral (let's call it $M(t)$ here) accumulates quadratic variation according to
$$ [M, M](t) = \int_0^t \alpha^2(s) ds =: t A(t)$$
which follows from Itô's isometry.
Therefore, what you have is that the rescaled version of your integral $A(t)^{-1/2} \int_0^t \alpha(s) dW_s $ is a Brownian motion. You can see that the only difference with respect to the normal random variable at this stage is the variance. As the scaling factor does not introduce any source of randomness, we can claim that the integral is a normal random variable with mean $0$ and variance $\int_0^t \alpha^2(s) ds$.
| null | CC BY-SA 4.0 | null | 2023-05-31T18:38:22.300 | 2023-05-31T18:44:05.040 | 2023-05-31T18:44:05.040 | 59915 | 59915 | null |
75725 | 1 | null | null | 2 | 41 | I am investigating various versions of nested and non-nested Fama and French factor models. Performance of the models is compared on the basis of squared Sharpe ratios. [Barillas et al. (2020, JFQA)](https://www.cambridge.org/core/journals/journal-of-financial-and-quantitative-analysis/article/model-comparison-with-sharpe-ratios/7AAB79D48CC55E0AD1BC6C8AABD6F5D9) propose a test that only requires the factors as inputs and does not rely on test assets. However, I cannot quite get my head around how squared Sharpe ratios are calculated in this case. Are they calculated for each factor individually and then summed up? This approach seems flawed, and if anything, it would only make sense if the models that are compared have the same number of factors.
| Squared Sharpe Ratio - Fama and French | CC BY-SA 4.0 | null | 2023-05-31T22:41:16.820 | 2023-06-01T12:36:16.560 | 2023-06-01T12:36:16.560 | 67640 | 67640 | [
"sharpe-ratio",
"fama-french"
]
|
75726 | 1 | null | null | 1 | 25 | I'm trying to get a solution for the foreign equity call struck in domestic currency, where the foreign equity in domestic currency is defined as $S=S^fX^\phi$ with $0<\phi<1$, instead of the normal $S=S^fX$ (See Bjork 2020 for the standard setting).
Here it would be incorrect to assume that $S$ has a drift of $r_d$ (the domestic risk-free rate) under $\mathbb{Q}^d$, as we would totally disregard the $\phi$ parameter. Is it OK to assume that the $\mu_S$ resulting from applying Itô's lemma to $S=S^fX^\phi$ under $\mathbb{Q}^d$ is the risk-neutral drift of $S$?
Thanks in advance
| Foreign equity call struck in domestic currency | CC BY-SA 4.0 | null | 2023-06-01T01:09:26.740 | 2023-06-01T01:09:26.740 | null | null | 67642 | [
"option-pricing",
"black-scholes",
"fx",
"exchange",
"pricing-formulae"
]
|
75727 | 1 | null | null | 0 | 33 | How would you find out if GDP growth is priced into stock prices?
In general, if I have a proprietary data set and I construct a cross-sectional factor related to that dataset, how would I test whether the factor is already priced in?
| How to find if a factor is priced? | CC BY-SA 4.0 | null | 2023-06-01T02:54:56.970 | 2023-06-02T03:51:46.997 | 2023-06-02T03:51:46.997 | 36938 | 36938 | [
"equities",
"factor-models"
]
|
75728 | 1 | null | null | 0 | 38 | In a Black-Scholes world, if you continuously delta-hedge an option, you're guaranteed to make almost $0 neglecting discreteness and higher-order delta terms. In the real world, institutions do use delta-hedging to create delta-neutral portfolios, to trade volatility or other variables. Since the real world isn't Black-Scholes, my questions are:
- To delta hedge, you need a delta value. Do real world institutions use the Black-Scholes delta?
- Since volatility is unobservable, what volatility do they use for this computation?
- How effective is the hedge, i.e. how close to 0 P&L does a delta-neutral portfolio achieve? I assume this is related to how much the volatility of the underlying changes.
| How effectively can you Delta hedge in practice? | CC BY-SA 4.0 | null | 2023-06-01T06:02:03.180 | 2023-06-01T06:02:03.180 | null | null | 66985 | [
"black-scholes",
"delta-hedging"
]
|
75729 | 1 | null | null | 2 | 22 | Roll's critique ([Roll, 1977](https://www.anderson.ucla.edu/documents/areas/fac/finance/1977-2.pdf)) can be summarized as follows (quoting [Wikipedia](https://en.wikipedia.org/wiki/Roll%27s_critique)):
- Mean-variance tautology: Any mean-variance efficient portfolio $R_{p}$ satisfies the CAPM equation exactly:
$$
E(R_{i})-R_{f}=\beta_{{ip}}[E(R_{p})-R_{f}].
$$
Mean-variance efficiency of the market portfolio is equivalent to the CAPM equation holding. This statement is a mathematical fact, requiring no model assumptions.
Given a proxy for the market portfolio, testing the CAPM equation is equivalent to testing mean-variance efficiency of the portfolio. The CAPM is tautological if the market is assumed to be mean-variance efficient.
- The market portfolio is unobservable: The market portfolio in practice would necessarily include every single possible available asset, including real estate, precious metals, stamp collections, jewelry, and anything with any worth. The returns on all possible investments opportunities are unobservable.
From statement 1, validity of the CAPM is equivalent to the market being mean-variance efficient with respect to all investment opportunities. Without observing all investment opportunities, it is not possible to test whether this portfolio, or indeed any portfolio, is mean-variance efficient. Consequently, it is not possible to test the CAPM.
The critique sounds quite devastating, thus my questions:
- Given Roll's critique, should we drop empirical tests of the CAPM?
If not, what can be concluded from an empirical test of the CAPM?
- Does the same or analogous critique apply to multifactor models?
I have tried to find an answer in Cochrane "Asset Pricing" (revised edition, 2005) or his lecture series on YouTube but have failed. Meanwhile, Bodie, Kane and Marcus "Investments" (12th edition, 2021) present Roll's critique and then just state the following: Given the impossibility of testing the CAPM directly, we can retreat to testing the APT.... I take this as giving up on testing the CAPM because of the critique. But researchers have continued testing asset pricing models (including the CAPM, I think) also after 1977, so there must be something to that...
| Testing asset pricing models with Roll's critique in mind | CC BY-SA 4.0 | null | 2023-06-01T07:09:08.610 | 2023-06-01T07:26:09.647 | 2023-06-01T07:26:09.647 | 19645 | 19645 | [
"asset-pricing",
"capm"
]
|
75730 | 2 | null | 53687 | 0 | null | I'm also wondering about the same issue. There is even an exercise (2.3) that requires readers to "Apply ETF tricks" "On DOLLAR BAR series of E-mini S&P 500 futures and Eurostoxx 50 futures". There are a few code implementations floating around on GitHub; however, all of them assume that the bars are already aligned.
I think the alignment should be done when generating the dollar bars; the volumes of all assets need to be considered in the process. For example, we could generate a dollar bar for each of the assets whenever the sum of asset volumes reaches a certain threshold.
| null | CC BY-SA 4.0 | null | 2023-06-01T08:38:10.457 | 2023-06-01T08:42:23.363 | 2023-06-01T08:42:23.363 | 67644 | 67644 | null |
75732 | 1 | null | null | 3 | 75 | I have the impression that asset pricing models such as the CAPM or Fama & French 3 factor model typically concern nominal rather than real (inflation-adjusted) prices/returns. If this is indeed so, why is that?
Here is my guess. In cross-sectional asset pricing, there is no inherent time dimension (that is why it is called cross sectional), so the concept of inflation is irrelevant. Yet the models are estimated on data from multiple periods, so the time dimension is present in the data.
Also, I suppose adjustment for inflation might not make a big difference when using daily data but it could become important when using monthly (or even lower frequency) data.
References to relevant texts would be appreciated.
Another question with a similar title but somewhat different content (more focus on continuous-time finance, risk-neutral measure and such) is [this one](https://quant.stackexchange.com/questions/38483/do-we-model-nominal-or-real-prices-of-assets).
| Nominal vs. real (inflation-adjusted) prices/returns in cross-sectional asset pricing | CC BY-SA 4.0 | null | 2023-06-01T09:23:19.320 | 2023-06-02T06:52:41.210 | 2023-06-02T06:52:41.210 | 19645 | 19645 | [
"asset-pricing",
"capm",
"reference-request",
"fama-french",
"inflation"
]
|
75733 | 1 | 75749 | null | 5 | 59 | I'm learning [QuantLib-Python](https://quantlib-python-docs.readthedocs.io/en/latest/index.html) and trying to replicate an Interest Rate Swap valuation on a custom curve constructed by passing lists of dates and discounting factors. Please see my code below
```
import QuantLib as ql
today = ql.Date(31, ql.May, 2023)
ql.Settings.instance().setEvaluationDate(today)
dates = [ql.Date(31, ql.May, 2023), ql.Date(28, ql.August, 2023), ql.Date(27, ql.November, 2023),
ql.Date(26, ql.February, 2024), ql.Date(27, ql.May, 2024), ql.Date(26, ql.May, 2025), ql.Date(26, ql.May, 2026)]
dfs = [1.00, 0.980626530240871, 0.961865563098355, 0.942995352345942, 0.923889577251464, 0.845638911735274, 0.770640534908645]
dayCounter = ql.ActualActual(ql.ActualActual.ISDA)
calendar = ql.UnitedStates()
curve = ql.DiscountCurve(dates, dfs, dayCounter, calendar)
curveHandle = ql.YieldTermStructureHandle(curve)
start = today
maturity = calendar.advance(start, ql.Period('1Y'))
fix_schedule = ql.MakeSchedule(start, maturity, ql.Period('1Y'))
float_schedule = ql.MakeSchedule(start, maturity, ql.Period('3M'))
customIndex = ql.IborIndex('index', ql.Period('3M'), 0, ql.USDCurrency(), ql.UnitedStates(), ql.ModifiedFollowing, True, dayCounter, curveHandle)
customIndex.addFixing(ql.Date(30, ql.May, 2023), 0.075)
notional = 100000000
swap = ql.VanillaSwap(ql.VanillaSwap.Payer, notional, fix_schedule, 0.0833, dayCounter,
float_schedule, customIndex, 0, dayCounter)
swap_engine = ql.DiscountingSwapEngine(curveHandle)
swap.setPricingEngine(swap_engine)
print(swap.NPV())
```
There is a possibility to print out undiscounted fixed and floating leg cashflows by
```
print("Net Present Value: {0}".format(swap.NPV()))
print()
print("Fixed leg cashflows:")
for i, cf in enumerate(swap.leg(0)):
print("%2d %-18s %10.2f"%(i+1, cf.date(), cf.amount()))
print()
print("Floating leg cashflows:")
for i, cf in enumerate(swap.leg(1)):
print("%2d %-18s %10.2f"%(i+1, cf.date(), cf.amount()))
```
The Net Present Value of a swap should be given by the difference of discounted floating and fixed leg cashflows. I tried to replicate calculations manually via
```
fwd1 = curve.forwardRate(ql.Date(31, 5, 2023), ql.Date(31, 8, 2023), dayCounter, ql.Continuous).rate()
df1 = curve.discount(ql.Date(31, 8, 2023))
tau1 = dayCounter.yearFraction(ql.Date(31, 5, 2023), ql.Date(31, 8, 2023))
cashflow1 = df1 * notional * fwd1 * tau1
print(cashflow1)
print()
fwd2 = curve.forwardRate(ql.Date(1, 9, 2023), ql.Date(30, 11, 2023), dayCounter, ql.Simple).rate()
df2 = curve.discount(ql.Date(30, 11, 2023))
tau2 = dayCounter.yearFraction(ql.Date(1, 9, 2023), ql.Date(30, 11, 2023))
cashflow2 = df2 * notional * fwd2 * tau2
print(cashflow2)
print()
fwd3 = curve.forwardRate(ql.Date(1, 12, 2023), ql.Date(29, 2, 2024), dayCounter, ql.Simple).rate()
df3 = curve.discount(ql.Date(29, 2, 2024))
tau3 = dayCounter.yearFraction(ql.Date(1, 12, 2023), ql.Date(29, 2, 2024))
cashflow3 = df3 * notional * fwd3 * tau3
print(cashflow3)
print()
fwd4 = curve.forwardRate(ql.Date(1, 3, 2024), ql.Date(31, 5, 2024), dayCounter, ql.Simple).rate()
df4 = curve.discount(ql.Date(31, 5, 2024))
tau4 = dayCounter.yearFraction(ql.Date(1, 3, 2024), ql.Date(31, 5, 2024))
cashflow4 = df4 * notional * fwd4 * tau4
print(cashflow4)
print()
floatCashFlow = cashflow1 + cashflow2 + cashflow3 + cashflow4
print(floatCashFlow)
print()
df = df4
fixRate = 0.0833
tau = dayCounter.yearFraction(ql.Date(31, 5, 2023), ql.Date(31, 5, 2024))
fixCashFlow = df * notional * fixRate * tau
print(fixCashFlow)
print()
valueSwap = floatCashFlow - fixCashFlow
print(valueSwap)
```
and got a staggeringly different NPV. Can anyone show me how to precisely arrive at the built-in NPV valuation using built-in methods for retrieving discount factors and forward rates? I suppose the difference is hidden somewhere in the payment schedules, but I failed to match the numbers by playing around with dates.
| Replicating QuantLib plain vanilla Interest Rate Swap valuation | CC BY-SA 4.0 | null | 2023-06-01T10:05:56.767 | 2023-06-02T13:38:28.290 | null | null | 27119 | [
"programming",
"quantlib",
"swaps",
"interest-rate-swap"
]
|
75734 | 1 | null | null | 0 | 94 | I'm using this formula to price sovereign CDSs according to CDS Returns, Augustin, Saleh and Xu (2020), Journal of Economic Dynamics and Control.
[](https://i.stack.imgur.com/oOP4M.png)
As you can see, the highlighted part will become negative in the case the spread is lower than the coupon. I'm looking at 25bp coupon contracts, therefore prices often go negative.
How can I get around this?
| CDS Pricing formula leads to negative prices | CC BY-SA 4.0 | null | 2023-06-01T11:24:53.607 | 2023-06-01T18:11:26.577 | null | null | 57239 | [
"fixed-income",
"cds"
]
|
75735 | 2 | null | 75734 | 1 | null | If the coupon c on the fixed leg is higher than the spread, then the buyer could find cheaper insurance in the market. So it's entirely reasonable that the price would be negative.
| null | CC BY-SA 4.0 | null | 2023-06-01T13:38:04.100 | 2023-06-01T18:11:26.577 | 2023-06-01T18:11:26.577 | 55385 | 55385 | null |
75736 | 1 | null | null | 0 | 26 | My aim is to determine whether teacher salaries in all 50 of the United States between 2013 and 2023—adjusted for inflation and the cost of living—differ significantly. I would like to ask what the wisest approach might be to modify unadjusted teacher salary averages (there is one average for each state) to account for these effects.
Afterwards, I would like to graph these modified salaries for a few of these states and examine whether changes in revenue receipts within all schools in a particular state leads to a significant difference in average salaries.
I am open to your insight on how I might best tweak teachers’ salaries to account for these effects and things I ought to consider when graphing the relationship I’ve described. Please bear in mind that I am referring to information from the National Education Association, which sources from public schools.
Thank you!
| Accounting for differences in average teacher salaries, adjusted for inflation and the cost of living in each state | CC BY-SA 4.0 | null | 2023-06-01T14:09:34.717 | 2023-06-01T15:56:40.407 | 2023-06-01T15:56:40.407 | 67650 | 67650 | [
"econometrics",
"analysis"
]
|
75737 | 1 | 75741 | null | 0 | 96 | I have a conceptual question regarding zero-coupon bonds. Say a bond issuer has issued a bond for funding itself, this bond has been split by the issuer (for simplicity assuming issuer is the same as market maker).
The issuer sells the bond at a deep discount to the face value. This discount becomes the capital gain (profit) for the bond holder. The issuer therefore has two obligations:
- C-STRIPS in which the issuer is paying coupon payments
- P-STRIPS in which the issuer is paying the discount
Why would the issuer undertake two obligations of coupon payments and discounted bond prices? If the bond were not stripped, the issuer would only have one obligation of coupon payments and would get the entire face value as a funding source. But by stripping the bond, the issuer has eroded its funding.
What is the rationale for the bond issuer to do so?
| Rationale for issuing zero coupon bonds | CC BY-SA 4.0 | null | 2023-06-01T14:40:10.227 | 2023-06-01T16:19:41.620 | null | null | 67651 | [
"interest-rates",
"zero-coupon"
]
|
75740 | 2 | null | 75737 | 2 | null | The rationale is that the bond issuer gets capital upfront with no periodic payment required. So depending on the use of the proceeds, it may be more beneficial to receive a lower amount upfront in exchange for the freedom from those coupon payments.
There may be also some investors that prefer zeros for various reasons, so if there is a demand for these bonds it would be natural to have some supply.
| null | CC BY-SA 4.0 | null | 2023-06-01T15:29:57.583 | 2023-06-01T15:29:57.583 | null | null | 20213 | null |
75741 | 2 | null | 75737 | 2 | null | The stripping does not affect the present value (to a first approximation). If the firm issued a 1 million dollar coupon bond and it was stripped, the value received from selling the coupon strip plus the value from selling the principal strip would be 1 million dollars. It is just a repackaging (minus investment bank fees, of course, and possibly plus a small premium paid by eager strip buyers).
| null | CC BY-SA 4.0 | null | 2023-06-01T16:19:41.620 | 2023-06-01T16:19:41.620 | null | null | 16148 | null |
75742 | 1 | 75747 | null | 0 | 61 | I was reading [Smile Dynamics II](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1493302) by Lorenzo Bergomi. It is clear to me that on page 2
$$
V_t^{T_1,T_2}=\frac{(T_2-t)V^{T_2}_{t}-(T_1-t)V^{T_1}_{t}}{T_2-T_1}
$$
is the fair strike of a forward-starting variance swap that starts accumulating variance at $T_1>t$ and matures at $T_2>T_1$. However, I find it difficult to conceptualize quantity $\xi_t^T=V_t^{T,T}$. What does it really represent? And why is it a good idea to model a term structure of such quantities in $T$?
Another basic question I had is the following: if the dynamics of $\xi_t^T$ is postulated to be lognormal in the first formula of page 2, then how come we end up with a zero-mean Ornstein-Uhlenbeck process in eq. (2.1)?
| Smile Dynamics - forward variance | CC BY-SA 4.0 | null | 2023-06-01T23:13:57.433 | 2023-06-02T11:35:02.633 | null | null | 44897 | [
"stochastic-volatility",
"term-structure",
"variance-swap"
]
|
75743 | 2 | null | 75732 | 1 | null | You can use nominal or real returns in the CAPM or Fama-French model. Both models have expressions for excess returns. As inflation will adjust nominal returns in the same way for different assets in the cross section, inflation cancels out.
More concretely, if $r^i_t = \log(P^i_t) - \log(P^i_{t-1})$ are nominal returns of asset $i$, $\pi_t = \log(\text{CPI}_t)-\log(\text{CPI}_{t-1})$ is inflation, and $\nu^i_t = r^i_t - \pi_t$ are real returns of asset $i$, then $r^i_t - r^j_t = \nu^i_t - \nu^j_t$. Excess returns will be equal if you use nominal or real returns.
| null | CC BY-SA 4.0 | null | 2023-06-02T03:14:46.520 | 2023-06-02T03:14:46.520 | null | null | 66800 | null |
75744 | 1 | null | null | -1 | 28 | If one would like to enter into a one-month swap on XAU/USD, how do you calculate the swap points or the forward price for XAU/USD for the swap agreement? Can anyone help me with this calculation using an example?
| How to calculate 1-month Buy/Sell swap points for XAU/USD by an example | CC BY-SA 4.0 | null | 2023-06-02T08:29:43.403 | 2023-06-02T12:51:34.213 | 2023-06-02T12:51:34.213 | 67658 | 67657 | [
"derivatives",
"swaps"
]
|
75745 | 1 | null | null | 0 | 10 | In [this question](https://quant.stackexchange.com/questions/74499), I asked whether it is better to use clustered or GMM-based standard errors for estimating and testing asset pricing models such as the CAPM. However, I then realized that I am not sure how to set up the panel data model needed for obtaining the clustered standard errors, because the $\beta$s are unobservable. I looked around and found a few threads referring to the panel data model: [1](https://quant.stackexchange.com/questions/35756), [2](https://quant.stackexchange.com/questions/36967), [3](https://quant.stackexchange.com/questions/32305). E.g. in [this answer](https://quant.stackexchange.com/questions/32305), Matthew Gunn writes:
>
A modern approach to consistently estimate standard errors might be to run the following panel regression and cluster by time $t$: $$ R_{it} - R^f_t = \gamma_0 + \gamma_1 \beta_i + \epsilon_{it}$$
However, $\beta_i$ is unobservable, so we cannot estimate the model using the usual techniques. I am not sure if the author actually meant $\beta_i$ or an estimate thereof, though. But if we substitute $\beta_i$ by its estimate $\hat\beta_i$ from an appropriate time series regression, we face the errors-in-variables problem (a.k.a. measurement error). This suggests the estimation results will be suboptimal, and clustering by time will not address that. Thus my question:
- How does one set up an appropriate panel data model for estimating (and later testing) the CAPM?
| How to set up a panel data model for estimating the CAPM? | CC BY-SA 4.0 | null | 2023-06-02T08:30:40.723 | 2023-06-02T08:30:40.723 | null | null | 19645 | [
"capm",
"estimation",
"paneldata"
]
|
75746 | 1 | null | null | 0 | 7 | With time-constant expected value of market's excess return, we can estimate the CAPM using GMM as follows. Equation $(12.23)$ in Cochrane ["Asset Pricing"](https://press.princeton.edu/books/hardcover/9780691121376/asset-pricing) (2005) section 12.2 (p. 241) says the moments are
$$
g_T(b) = \begin{bmatrix} E(R^e_t-a-\beta f_t) \\ E[(R^e_t-a-\beta f_t)f_t] \\ E(R^e-\beta \lambda) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \tag{12.23}.
$$
where $R^e_t=(R^e_{1,t},\dots,R^e_{N,t})'$ is a vector of individual assets' excess returns, $\beta=(\beta_1,\dots,\beta_N)'$ is a vector of betas, $f_t$ is factor's excess return and $\lambda=E(f)$ is the expected value of the factor's excess return. The first two rows correspond to time series regressions for $N$ assets (one regression per asset), so there are actually $2N$ conditions. If I understand correctly, the third row corresponds to a cross-sectional regression of time-averaged returns:
$$
E_T(R^{ei})=\beta_i' \lambda+\alpha_i, \quad i=1,2,\dots,N. \tag{12.10}
$$
($(12.10)$ is specified for potentially many factors, but $(12.23)$ considers the simple case of a single factor, so vector $\beta'$ turns into scalar $\beta$, and the same holds for $\lambda$.)
Question: Could this be modified to allow for time-varying expectation of market's excess return ($\lambda_t$ in place of $\lambda$)? If so, how should the moment conditions be modified?
| GMM estimation of the CAPM allowing for time-varying expectation of market's excess return | CC BY-SA 4.0 | null | 2023-06-02T10:17:49.420 | 2023-06-02T10:17:49.420 | null | null | 19645 | [
"capm",
"estimation",
"gmm"
]
|
75747 | 2 | null | 75742 | 2 | null | Ok, so
$$
d\xi_t^T = \omega e^{-k(T-t)} \xi_t^T dW_t
$$
where $W$ is standard Brownian.
Then, just by applying Ito I hope you can see that
$$
\log \xi_t^T / \xi_0^T = \omega \int_0^t e^{-k(T-u)} dW_u - \frac12 \omega^2 \int_0^t e^{-2k(T-u)} du
$$
Now just write
$$
e^{-k(T-u)} = e^{-k(T-t)}e^{-k(t-u)}
$$
Then
$$
\log \xi_t^T / \xi_0^T = \omega e^{-k(T-t)} X_t - \frac12 \omega^2 e^{-2k(T-t)} E_0[X_t^2]
$$
with $X_t = \int_0^t e^{-k(t-u)} dW_u$ which is an O-U process.
| null | CC BY-SA 4.0 | null | 2023-06-02T11:26:19.913 | 2023-06-02T11:35:02.633 | 2023-06-02T11:35:02.633 | 65759 | 65759 | null |
75748 | 1 | null | null | 2 | 53 | I am not able to make QuantLib match the Bloomberg price; maybe I am not using the right parameters in `ql.Schedule` or `ql.FixedRateBond`.
I get a price of 90.012 instead of 93.508 (a huge spread between my result and Bloomberg).
Below is a reproducible example:
```
import QuantLib as ql
todaysDate= ql.Date(2,6,2023)
ql.Settings.instance().setEvaluationDate(todaysDate)
# Market conventions
calendar = ql.UnitedStates(ql.UnitedStates.GovernmentBond)
settlementDays = 2
faceValue = 100
compounding = ql.Compounded
compoundingFrequency = ql.Semiannual
# Bond schedule setup
issueDate = ql.Date(15, 5, 2019)
maturityDate = ql.Date(15, 5, 2029)
tenor = ql.Period(ql.Semiannual)
schedule = ql.Schedule(issueDate, maturityDate, tenor, calendar,
ql.Unadjusted, ql.Unadjusted, ql.DateGeneration.Backward, False)
# Bond parameters
rate = 3.5/100
dayCount = ql.Thirty360(ql.Thirty360.BondBasis)
bond = ql.FixedRateBond(settlementDays, faceValue, schedule, [rate], dayCount)
# YTM calculation
yieldRate = 4.767131/100 # YTM
yieldDayCount = ql.Thirty360(ql.Thirty360.BondBasis)
yieldCompoundingFrequency = ql.Semiannual
cleanPrice = bond.cleanPrice(yieldRate, yieldDayCount, compounding,
yieldCompoundingFrequency, issueDate)
print("The clean price of the bond is: ", cleanPrice)
```
Bond DES and YASQ
[](https://i.stack.imgur.com/3zAtX.png)
[](https://i.stack.imgur.com/2QMEc.jpg)
Can someone help me to understand what I am doing wrong?
Side note:
I have seen [this answer](https://quant.stackexchange.com/a/68648/42110) but it seems that my issue is something else.
| Quantlib match clean price with bbg clean price : Huge gap between calculated price and bloomberg | CC BY-SA 4.0 | null | 2023-06-02T13:17:12.510 | 2023-06-02T15:18:55.290 | 2023-06-02T15:18:55.290 | 42110 | 42110 | [
"programming",
"quantlib"
]
|
75749 | 2 | null | 75733 | 2 | null | 1/ The `cf.amount()` method returns undiscounted cashflows, not PV'd ones.
2/ Your `curve.forwardRate()` call is returning a continuously compounded zero rate; use `customIndex.fixing(fixingDate)` to get the relevant Ibor forward. See the sketch below.
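A minimal sketch of what this means in the question's code (assuming the `curve`, `swap` and `customIndex` objects from the question are in scope, and ignoring settlement-date subtleties):
```
# PV each leg by discounting the (undiscounted) cf.amount() values on the same curve.
pv_fixed = sum(curve.discount(cf.date()) * cf.amount() for cf in swap.leg(0))
pv_float = sum(curve.discount(cf.date()) * cf.amount() for cf in swap.leg(1))
print(pv_float - pv_fixed)   # payer swap: receive float, pay fixed
print(swap.NPV())            # should be very close to the line above

# For the floating-leg projections, query the index instead of curve.forwardRate(), e.g.
print(customIndex.fixing(ql.Date(28, ql.August, 2023)))  # simple Ibor forward fixed on that date
```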
| null | CC BY-SA 4.0 | null | 2023-06-02T13:38:28.290 | 2023-06-02T13:38:28.290 | null | null | 35980 | null |
75750 | 2 | null | 38483 | -1 | null | The process stated by Shreve is general for nominal or real $S(t)$. If $S(t)$ will be assumed to be nominal or real depends on the objectives of the researcher.
For example, let us say that the process is for an asset that has expected real returns equal to zero, but returns follow a Wiener process. An option to model $S(t)$ is
$dS(t)=\sigma(t)S(t)dW(t)$,
where $S(t)$ denotes real prices, that is, prices adjusted for inflation.
On the other hand, if $S(t)$ are nominal prices, but real expected returns are still zero, then the specification with $\alpha(t)=0$ does not work well as nominal prices should be adjusted for inflation to imply zero expected real returns. We can then write a more general specification
$dS(t)=\alpha(t)S(t)dt + \sigma(t)S(t)dW(t)$.
In this case, if $S(t)$ denote nominal prices, then $\alpha(t)$ will change to accomodate changes in nominal asset prices caused by changes in inflation. It is more general than this, $\alpha(t)$ can accomodate a component for real returns plus inflation adjustments.
In the end, if $S(t)$ denotes nominal or real prices is a modeling choice.
| null | CC BY-SA 4.0 | null | 2023-06-02T14:43:52.337 | 2023-06-02T19:20:03.163 | 2023-06-02T19:20:03.163 | 66800 | 66800 | null |
75751 | 1 | null | null | 0 | 13 | I need a fast package for backtesting in R. I'm going to optimize a lot so I'll be running my strategies many millions of times.
I know about packages like `quantstrat` or `SIT`, but they are terribly slow for my purposes; I have one strategy, and I'm not going to model 20 portfolios at the same time and such.
Which package can you recommend?
| what are the packages for effective backtesting in R | CC BY-SA 4.0 | null | 2023-06-02T18:26:47.773 | 2023-06-02T18:26:47.773 | null | null | 55240 | [
"programming",
"backtesting"
]
|
75752 | 2 | null | 75702 | 0 | null | I have confirmed the two forms of the characteristic functions are equivalent, it's just that I had obtained a polar form. Here is the deduction from the original form to mine (easier to show), first simplifying notation by writing the exponential term as $c$:
$$\begin{align}
\Phi(u) &= \left( \frac{k-iuc}{k-iu} \right)^{\lambda/\eta} = \left( \frac{\sqrt{k^2+(uc)^2}\,\mathrm{e}^{i \arctan \left(-uc/k\right)}}{\sqrt{k^2 + u^2}\,\mathrm{e}^{i \arctan \left(-u/k\right)}} \right)^{\lambda/\eta} \\
&= \left( \sqrt{\frac{k^2+(uc)^2}{k^2 + u^2}}\exp\left\{i \left[\arctan\left(-\frac{uc}{k}\right) - \arctan \left(-\frac{u}{k}\right)\right]\right\} \right)^{\lambda/\eta} \\
&= \left( \sqrt{\frac{k^2+(uc)^2}{k^2 + u^2}}\exp\left\{i \arctan \left(\frac{-\frac{uc}{k}-\left(-\frac{u}{k}\right)}{1+\left(-\frac{uc}{k}\right)\cdot\left(-\frac{u}{k}\right)}\right)\right\} \right)^{\lambda/\eta} \\
&= \left( \sqrt{\frac{k^2+(uc)^2}{k^2 + u^2}}\exp\left\{i \arctan \left(\frac{\frac{u}{k}(1-c)}{1+\frac{u^2 c}{k^2}}\right)\right\} \right)^{\lambda/\eta}
= \left( \sqrt{\frac{k^2+(uc)^2}{k^2 + u^2}}\exp\left\{i \arctan \left(\frac{ku(1-c)}{k^2+ u^2 c}\right)\right\} \right)^{\lambda/\eta}
\end{align}$$
And eventually
$$ \Phi(u) = \left(\frac{k^2+u^2 \mathrm{e}^{-2\eta(t-t_0)}}{k^2 + u^2}\right)^{\frac{\lambda}{2\eta}}\exp\left\{i \frac{\lambda}{\eta} \left[\arctan \left(\frac{ku(1-\mathrm{e}^{-\eta(t-t_0)})}{k^2+ u^2 \mathrm{e}^{-\eta(t-t_0)}}\right)\right]\right\}$$
I am missing the distribution for the stationary case, which was my other question, so that's open.
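A quick numerical sanity check of the equivalence (a minimal numpy sketch with arbitrary parameter values):
```
import numpy as np

k, lam, eta, dt = 2.0, 1.5, 0.7, 0.8      # arbitrary parameters, dt = t - t0
u = np.linspace(-10, 10, 201)
c = np.exp(-eta * dt)

paper_form = ((k - 1j * u * c) / (k - 1j * u)) ** (lam / eta)
polar_form = ((k**2 + (u * c)**2) / (k**2 + u**2)) ** (lam / (2 * eta)) \
             * np.exp(1j * lam / eta * np.arctan(k * u * (1 - c) / (k**2 + u**2 * c)))

print(np.max(np.abs(paper_form - polar_form)))   # ~ machine precision
```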
| null | CC BY-SA 4.0 | null | 2023-06-02T18:46:13.570 | 2023-06-02T18:46:13.570 | null | null | 66523 | null |
75753 | 1 | null | null | 0 | 62 | I'm trying to calculate the total return (in %) on a 9% coupon 20-year bond with the following assumptions:
- reinvestment rate of 6% annually (3% every six months)
- terminal yield of 12% (semiannual rate of 6%)
- face value of the bond is 1000
Assume I buy the bond at par and hold it for 10 years.
I keep getting 7.37% but the book I am using says it should be 6.6%.
```
=((1+(-((FV(0.06/2,20,45)+PV(0.12/2,20,45,1000))/(1000))^(1/(10*2))-1))^2)-1
```
In steps:
```
1. Total coupon payments plus interest on interest: -FV(0.06/2,20,45) -> 1209.167
2. Projected sale price at the end of 10 years: -PV(0.12/2,20,45,1000) -> 827.951
3. Add 1, 2: 1209.167+827.951 -> 2037.118
4. total present dollars/purchase price ^(1/h) -1 : (2037.118/1000)^(1/20)-1 -> 0.036217231
5. (semiannual rate^2) -1 : 1.036217231^2-1 -> 0.07374615
```
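For reference, here is a small Python sketch reproducing the same arithmetic (it only reproduces the 7.37% figure from the steps above; it does not resolve the discrepancy with the book's 6.6%):
```
coupon, periods_held, reinvest, terminal_yield, face = 45.0, 20, 0.03, 0.06, 1000.0

# 1. Coupons plus interest-on-interest, reinvested at 3% per half-year for 20 half-years.
fv_coupons = coupon * ((1 + reinvest) ** periods_held - 1) / reinvest              # ~1209.17

# 2. Projected sale price after 10 years: the remaining 20 semiannual coupons and the
#    face value, discounted at 6% per half-year.
sale_price = coupon * (1 - (1 + terminal_yield) ** -20) / terminal_yield \
             + face * (1 + terminal_yield) ** -20                                  # ~827.95

total = fv_coupons + sale_price                                                    # ~2037.12
semiannual_return = (total / face) ** (1 / periods_held) - 1                       # ~3.62%
annual_return = (1 + semiannual_return) ** 2 - 1                                   # ~7.37%
print(round(annual_return * 100, 2))
```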
This is not homework help.
The above problem is one entry in the following matrix:
The values across the top are reinvestment rates. The values along the left side are terminal yields. The entries are total returns. The matrix was taken from Common Sense on Mutual Funds by Jack Bogle, and represents his "forecast" for bond returns in the 1990s. He calls it the "Bond Market Total Return Matrix for the 1990s".
| Terminal yield \ Reinvestment rate | 6% | 7% | 8% | 9% | 10% | 11% | 12% |
|---|---|---|---|---|---|---|---|
|12 |6.6% |7.0% |7.3% |7.7% |8.0% |8.4% |8.2% |
|11 |7.1 |7.4 |7.7 |8.1 |8.5 |8.5 |9.2 |
|10 |7.5 |7.8 |8.2 |8.5 |8.7 |9.3 |9.7 |
|9 |8.0 |8.3 |8.6 |9.0 |9.4 |9.9 |10.1 |
|8 |8.5 |8.8 |9.3 |9.5 |9.9 |10.2 |10.6 |
|7 |9.0 |9.6 |9.6 |10.0 |10.4 |10.7 |11.1 |
|6 |10.0 |9.8 |10.2 |10.5 |10.9 |11.3 |11.7 |
| Total Return on Bond | CC BY-SA 4.0 | null | 2023-06-02T18:52:35.833 | 2023-06-03T22:53:13.233 | 2023-06-03T22:53:13.233 | 67664 | 67664 | [
"bond",
"returns"
]
|
75754 | 1 | null | null | 0 | 28 | Is there a way to widen the 95% VaR by changing the assumed distribution of a portfolio of stocks? When calculating the 95% VaR of my portfolio using the holdings-based approach (which requires the covariance matrix), the realized returns breach the upper/lower VaR bounds more than 5% of the time. This is expected, since in general the market has fatter tails.
What are typical, easily implemented ways to widen the 95% VaR so that the breach rate is actually closer to 5%?
The tools I have are simulation - I am open to simulate a distribution or a bunch of stock paths. Or just another quick computational method.
| Wider VaR for portfolio risk? | CC BY-SA 4.0 | null | 2023-06-02T23:20:03.527 | 2023-06-03T09:16:04.340 | 2023-06-03T09:16:04.340 | 19645 | 66191 | [
"portfolio-optimization",
"value-at-risk",
"modern-portfolio-theory"
]
|
75755 | 2 | null | 75754 | 0 | null | If I understand the question correctly, you have a covariance matrix, you assume that your market factors are normally distributed, you calculate VaR, and the VaR comes out "too small". You're looking for a way to increase the VaR being calculated that would pass muster with others who might review / challenge / validate your methodology.
I recently commented on some [ways to debug VaR being "too large"](https://quant.stackexchange.com/questions/75463/var-backtesting-reasons-for-over-and-underestimation-of-value-at-risk-estimate/75675#75675), and I will add another suggestion - just tweak the volatilities (with appropriate controls, of course).
For analysis, divide and conquer - use "component VaR" to disaggregate the VaR into smallest pieces for which you can also attribute the P&L to market factors.
This will tell you which of your market factors don't contribute "enough". Then increase their volatility in your covariance matrix. But make sure you have proper governance around this process.
By how much should you increase the volatilities? If you suspect that the problem is that you assume a normal distribution while in reality the tails are fatter, then you can calculate the historical kurtosis of the problematic factors to verify this and to guesstimate by how much to increase each volatility to compensate. Or, as an inverse problem, solve for the volatility increases that would sufficiently increase the (component) VaR.
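To illustrate the volatility-tweaking idea (not a prescription — the weights, volatilities, and the 20% bump below are made-up numbers), here is a minimal parametric-VaR sketch in Python:
```python
import numpy as np

# Hypothetical 4-asset portfolio
w = np.array([0.4, 0.3, 0.2, 0.1])             # weights
vols = np.array([0.010, 0.015, 0.020, 0.012])  # daily volatilities
corr = np.full((4, 4), 0.3) + 0.7 * np.eye(4)  # simple correlation structure
cov = np.outer(vols, vols) * corr

def parametric_var(w, cov, z=1.645):
    """One-day 95% VaR under the normal assumption."""
    return z * np.sqrt(w @ cov @ w)

print("base VaR:     ", parametric_var(w, cov))

# Suppose component-VaR analysis flags factors 3 and 4 as not contributing enough:
# bump their volatilities by 20% (with appropriate governance, as noted above).
scale = np.array([1.0, 1.0, 1.2, 1.2])
cov_adj = cov * np.outer(scale, scale)

print("adjusted VaR: ", parametric_var(w, cov_adj))
```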
| null | CC BY-SA 4.0 | null | 2023-06-03T00:05:20.753 | 2023-06-03T00:05:20.753 | null | null | 36636 | null |
75756 | 1 | null | null | 0 | 34 | Why is it that the higher the fixed rate of a swap, the higher its DV01 will be?
| Higher coupon on interest rate swap has higher DV01 | CC BY-SA 4.0 | null | 2023-06-03T07:08:15.983 | 2023-06-03T08:30:57.010 | null | null | 67672 | [
"interest-rate-swap"
]
|
75757 | 2 | null | 75756 | 0 | null | It won't.
The par fixed rate $c$ of a fixed/float $N$-year spot starting IRS is
$$
c=\frac{1-df_N}{dv01}
$$
with $dv01=\sum_{i=1}^N \delta_i \cdot df_i$, where $df_i$ are discount factors and $\delta_i$ are day count fractions. Hence $dv01$ is essentially just the (day-count-weighted) sum of discount factors.
The higher rates are, the lower the $dv01$, because discount factors decrease as rates rise.
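A quick numerical illustration of the two formulas (a sketch assuming a flat curve and annual periods, so $\delta_i = 1$):
```python
import numpy as np

def dv01_and_par_rate(r, n_years=10):
    """Flat-curve dv01 (sum of discount factors) and the par fixed rate."""
    t = np.arange(1, n_years + 1)
    df = (1.0 + r) ** -t          # discount factors
    dv01 = df.sum()               # delta_i = 1 for annual periods
    c = (1.0 - df[-1]) / dv01     # par fixed rate c = (1 - df_N) / dv01
    return dv01, c

for r in (0.02, 0.05, 0.08):
    dv01, c = dv01_and_par_rate(r)
    print(f"rate level {r:.0%}: dv01 = {dv01:.3f}, par rate = {c:.3%}")
```
The higher the rate level, the smaller the sum of discount factors and hence the lower the $dv01$, even though the par fixed rate is higher.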
| null | CC BY-SA 4.0 | null | 2023-06-03T08:11:19.360 | 2023-06-03T08:30:57.010 | 2023-06-03T08:30:57.010 | 35980 | 35980 | null |
75758 | 1 | null | null | -3 | 18 | This is the first time that I write in this forum. I suppose many of you currently work in the financial industry: banks, hedge funds, fintechs... I would like to ask for suggestions for a project or tool that could help traders in their daily lives. I am interested in working on a trading floor as a quant analyst when I finish my master's degree, and I want to show some projects during the interviews. I was thinking of something visual, like a UI or something similar, with some models incorporated. I am new in this world, so any help will be much appreciated. I know how to code in Python and C++, and also a little bit in JS. Thank you in advance.
| Suggestions for a project | CC BY-SA 4.0 | null | 2023-06-03T09:16:28.290 | 2023-06-03T09:16:28.290 | null | null | 62114 | [
"programming",
"interest-rates",
"fx",
"trading",
"trading-systems"
]
|
75759 | 1 | null | null | 0 | 16 | I'm a novice, so apologies for the trivial question. I've read many articles on WFA but I can't figure this out. After the IS optimization test (1000 random tests, rolling window, using vectorBT Pro), I end up with the following table of the best-performing parameters for each split:
```
split fast_sma slow_sma picks hd tot_ret
0 16 44 3 13 0.595573
1 17 23 4 9 0.303525
2 21 34 4 12 0.143957
3 24 39 3 19 0.082345
4 14 33 3 11 0.160979
5 9 63 2 16 0.138730
6 23 22 2 12 0.115002
7 9 33 3 19 0.193508
8 22 47 3 17 0.127916
9 8 24 3 7 0.212058
10 10 60 2 6 0.201872
11 24 41 2 17 0.351140
12 19 29 3 9 0.455465
```
Now, I'm not sure of the next steps: shall I test `16-44-3-13` on all 13 OOS splits? And then test all the other combinations as well and see which performs better overall? Or shall I test only the best ones?
Or should I take an average of all of them?
Then, I'll end up with a similar table after the OOS test. What to do with the final results?
| In Walk Forward Analysis, I'm confused on how to handle the OOS part correctly | CC BY-SA 4.0 | null | 2023-06-03T14:01:58.080 | 2023-06-03T14:01:58.080 | null | null | 38252 | [
"backtesting"
]
|
75760 | 1 | null | null | 0 | 5 | I am creating a RAROC/FTP model for different loan types on a credit union balance sheet.
I am stuck on how to calculate the funding benefit of equity that would be netted out of my cost-of-funds benchmark. An example below shows the funding benefit at 0.17%. Does anyone know where that figure may be sourced from, or what calculation method is being used to arrive at that result?
[](https://i.stack.imgur.com/Zcgth.png)
| How to calculate the Funding Benefit of Equity for a profitability or FTP calculator for a credit union | CC BY-SA 4.0 | null | 2023-06-03T21:13:17.213 | 2023-06-03T21:13:17.213 | null | null | 67679 | [
"ftp"
]
|
75761 | 1 | null | null | 0 | 4 | Given the following SDE: $$ d\psi_t = \rho dt + \mu \psi_t dX_t$$, where $G(t) = \rho t$, $\rho = \frac{1}{T}$, $\psi_0 = 0$, $T=1$, $\mu > 0$, and $X_t$ is a standard Brownian motion (assume we know one specific path of $\psi_t$). Further, $E$ denotes the mathematical expectation and $1$ the indicator function.
The goal is to find some function $a(t)$ that can be obtained by solving an equation by backward induction using a partition $0\leq t_0 < t_1 \dots < t_n = T$ of $[0, T]$ given the 'starting' value $a(t_n) = a(T) = 0$.
If we now define the mesh by $\Delta = t_n - t_{n-1}$, the above-mentioned equation takes the following form:
---
- First step of backward induction: $\quad t = t_{n-1}$
$$ E_{a(t_{n-1})} \left[ 1_{\{ \psi_{\Delta} < a(t_n) \}} \psi_{\Delta} \right] = 0 \quad \implies \quad E_{a(t_{n-1})} \left[ 1_{\{ \psi_{\Delta} < 0 \}} \psi_{\Delta} \right] = 0 $$
This equation should be solvable using Monte-Carlo: simulate values of $\psi_{\Delta}$ with $\psi_0 = a(t_{n-1})$, compute the average of $1_{\{ \psi_{i} < 0 \}} \psi_{i}$, and solve for $a(t_{n-1})$. (How?)
As an alternative, it should be possible to derive an explicit formula for the transition density of $\psi$ and use it to solve for $a(t_{n-1})$. (Again: how?)
---
- Second step, as we know $a(t_{n-1})$: $\quad t = t_{n-2}$
\begin{align}
& E_{a(t_{n-2})} \left[ 1_{\{ \psi_{2\Delta} < 0 \}} \psi_{2\Delta} \right] \\
+ & E_{a(t_{n-2})} \left[ 1_{\{ \psi_{\Delta} < a(t_{n-1}) \}} \right] + E_{a(t_{n-2})} \left[ 1_{\{ \psi_{\Delta} < a(t_{n-1}) \}} \psi_{\Delta} \right] = 0
\end{align}
By solving this again, we are supposed to get the value of $a(t_{n-2})$, and we continue expanding the equation to obtain $a(t_{n-3}), \dots, a(t_0)$, e.g.:
---
- Third step, as we know $a(t_{n-2})$: $\quad t = t_{n-3}$
\begin{align}
& E_{a(t_{n-3})} \left[ 1_{\{ \psi_{3\Delta} < 0 \}} \psi_{3\Delta} \right] + \\
& E_{a(t_{n-3})} \left[ 1_{\{ \psi_{2\Delta} < a(t_{n-1}) \}} \right] + E_{a(t_{n-3})} \left[ 1_{\{ \psi_{2\Delta} < a(t_{n-1}) \}} \psi_{\Delta} \right] + \\
& E_{a(t_{n-3})} \left[ 1_{\{ \psi_{\Delta} < a(t_{n-2}) \}} \right] + E_{a(t_{n-3})} \left[ 1_{\{ \psi_{\Delta} < a(t_{n-2}) \}} \psi_{\Delta} \right] = 0
\end{align}
---
I am struggling with how to derive the correct values $a(t)$ for every point of the partition, as I don't know how to solve the expanding equation (neither for $t = t_{n-1}$ nor for smaller values of $t$, where the equation only gets less handy). There are hints pointing towards Monte-Carlo and/or the transition density of $\psi$, but so far I have failed to figure out how to apply either method. Can anyone point me in the right direction? A rough sketch of my Monte-Carlo attempt for the first step is below.
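For concreteness, this is how I currently evaluate the left-hand side of the first-step equation by Monte-Carlo (a rough Euler–Maruyama sketch; $\mu$ and the discretisation settings are placeholders — the step I am missing is how to then solve for $a(t_{n-1})$):
```python
import numpy as np

rho, mu = 1.0, 0.5                 # rho = 1/T with T = 1; mu is a placeholder
n_substeps, n_paths = 50, 100_000

def lhs_first_step(a, delta, seed=0):
    """Monte-Carlo estimate of E_a[ 1{psi_Delta < 0} * psi_Delta ] with psi_0 = a."""
    rng = np.random.default_rng(seed)
    dt = delta / n_substeps
    psi = np.full(n_paths, a, dtype=float)
    for _ in range(n_substeps):
        dX = rng.normal(0.0, np.sqrt(dt), n_paths)
        psi += rho * dt + mu * psi * dX          # Euler step of d(psi) = rho dt + mu psi dX
    return np.mean(np.where(psi < 0.0, psi, 0.0))

print(lhs_first_step(a=0.1, delta=0.1))
```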
| Backward induction: equation including expected values of stochastic process | CC BY-SA 4.0 | null | 2023-06-03T23:14:36.807 | 2023-06-03T23:14:36.807 | null | null | 67666 | [
"stochastic-processes",
"monte-carlo",
"stochastic-integral",
"transition-distribution"
]
|