Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
752
|
2
| null |
575
|
2
| null |
Have you considered Dynamic Linear Models? I don't know enough about currencies or DLMs to give any more guidance, but it may be worth a few minutes of googling to see if you can apply them.
| null |
CC BY-SA 2.5
| null |
2011-03-18T14:51:39.827
|
2011-03-18T14:51:39.827
| null | null |
106
| null |
753
|
1
|
770
| null |
11
|
1581
|
Suppose I have a trend-following strategy (on close-to-close data) that has not been getting acceptable returns for some time. When should I start thinking about shutting it down?
|
When to shut down a trend following strategy?
|
CC BY-SA 2.5
| null |
2011-03-18T15:40:17.987
|
2011-03-24T03:46:22.633
| null | null |
155
|
[
"arbitrage"
] |
754
|
2
| null |
753
|
6
| null |
You can use an equity-based model. Stop trading when your equity drops below your "X-day" equity moving average, and resume trading when your equity crosses above the "X-day" equity moving average. You could also do this by measuring the slope of the curve, and not trading when the slope is statistically below 0. I like this method because it does not tie your decision to whether the trend-following strategy works, or even whether there is really a trend, but is based purely on monetary concerns.
-Ralph Winters
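A minimal sketch of such an equity-MA switch in Java (the window length X and the equity series are hypothetical inputs):
```
// Trade only while equity sits above its X-day moving average (X hypothetical).
public static boolean[] tradingOn(double[] equity, int x) {
    boolean[] on = new boolean[equity.length];
    double sum = 0.0;
    for (int i = 0; i < equity.length; i++) {
        sum += equity[i];
        if (i >= x) sum -= equity[i - x];       // maintain a rolling X-day window
        double ma = sum / Math.min(i + 1, x);   // X-day equity moving average
        on[i] = equity[i] > ma;                 // stop below the MA, resume above it
    }
    return on;
}
```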
| null |
CC BY-SA 2.5
| null |
2011-03-18T15:57:57.960
|
2011-03-18T15:57:57.960
| null | null |
520
| null |
755
|
2
| null |
753
|
9
| null |
You ought to have pre-determined "kill" switches, like a maximum allowable drawdown or time-from-high. Ideally you should get an idea of what these values would be from your backtests.
When you do shut down a model, don't just throw away the code. The strategy might not be working at the moment, but it could come back in the future. I just heard that models turned off years ago are now working as of this week because of the higher volatility. Of course, it helps if you can do automated paper trading with reasonable accuracy.
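As a rough illustration, a minimal Java sketch of such kill switches, with hypothetical thresholds:
```
// Pre-determined "kill" switches: max drawdown and time-from-high (thresholds hypothetical).
public class KillSwitch {
    private double peak = 0.0;
    private int daysSincePeak = 0;

    // Feed one equity value per day; returns true when the model should be shut down.
    public boolean update(double equity, double maxDrawdown, int maxDaysFromHigh) {
        if (equity >= peak) { peak = equity; daysSincePeak = 0; }
        else daysSincePeak++;
        double drawdown = (peak - equity) / peak;
        return drawdown > maxDrawdown || daysSincePeak > maxDaysFromHigh;
    }
}
```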
| null |
CC BY-SA 2.5
| null |
2011-03-18T16:02:34.780
|
2011-03-18T16:02:34.780
| null | null |
35
| null |
756
|
2
| null |
738
|
7
| null |
They are not mutually exclusive. PCA and clustering are similar but used for different purposes. You could use PCA to whittle down 10 risk factors to, say, 4 uncorrelated factors, and you could combine securities with different factors into different clusters with offsetting return and variance characteristics. However, when you say you want to derive risk factors, that implies that you are dealing more with variables, and PCA (or factor analysis) is more appropriate. If you are really interested in risk segments across nominal variables, say asset classes, you would be interested more in clustering.
| null |
CC BY-SA 2.5
| null |
2011-03-18T16:44:59.183
|
2011-03-18T16:44:59.183
| null | null |
520
| null |
757
|
1
|
758
| null |
3
|
3486
|
I am using the Mersenne twister random number generator in Java for a Monte Carlo simulation. I need a uniform distribution of values between -1 and 1. My code is below (I am importing [org.apache.commons.math.random.MersenneTwister](http://commons.apache.org/math/apidocs/org/apache/commons/math/random/MersenneTwister.html), which is an Apache Commons library that contains the methods I am calling).
```
for(int i = 0; i<= NumberOfTrials-1; i++ ) {
MersenneTwister mtsign = new MersenneTwister();
boolean sign = mtsign.nextBoolean(); // random true or false
MersenneTwister mt = new MersenneTwister();
if (sign=true){
random[i] = mt.nextDouble(); } //i.e. (random number between 0 and 1)
else if (sign=false){
random[i] = - mt.nextDouble(); } // i.e. random number between -1 and 0
}
```
Each index of the resulting array, `random[]`, contains the same value for some reason (and all are positive as boolean keeps returning true). Any thoughts/advice is greatly appreciated.
|
Mersenne twister random number generator in Java for Monte Carlo Sim.
|
CC BY-SA 2.5
| null |
2011-03-18T17:01:36.673
|
2011-03-25T12:54:12.467
|
2011-03-25T12:54:12.467
|
603
|
603
|
[
"monte-carlo",
"programming",
"random-walk"
] |
758
|
2
| null |
757
|
7
| null |
The problem is that you are creating a new random number generator for each iteration - the default constructor seeds from the system clock, so generators created within the same millisecond yield the same sequence. (Note also that `sign=true` is an assignment, not a comparison; use `if (sign)`.) Move `new MersenneTwister()` out of the loop:
```
MersenneTwister mtsign = new MersenneTwister();
MersenneTwister mt = new MersenneTwister();
for(int i = 0; i<= NumberOfTrials-1; i++ ) {
// use mtsign and mt here
...
}
```
Furthermore, you don't need two generators, you can just rescale:
```
MersenneTwister mt = new MersenneTwister();
for(int i = 0; i<= NumberOfTrials-1; i++ ) {
random[i] = mt.nextDouble() * 2 - 1;
}
```
Lastly, in your code, you may need to push the generator even further up the call tree to avoid generating the same sequence repeatedly.
| null |
CC BY-SA 2.5
| null |
2011-03-18T18:08:34.780
|
2011-03-18T18:14:47.910
|
2011-03-18T18:14:47.910
|
90
|
90
| null |
760
|
1
|
773
| null |
11
|
2066
|
I came across this [article](http://historysquared.com/2011/01/31/open-interest-in-futures-markets-predicts-stock-bond-commodity-and-currency-prices-says-study/) and became curious. Can the futures market's open interest really predict market action?
|
Can the futures market's open interest predict commodity, treasury, and equity returns?
|
CC BY-SA 2.5
| null |
2011-03-18T19:01:20.253
|
2019-09-08T08:00:58.303
|
2019-09-08T08:00:58.303
|
20795
|
520
|
[
"futures",
"trading",
"forecasting",
"prediction",
"open-interest"
] |
761
|
2
| null |
691
|
4
| null |
Here is another link to some non-financial data sets, from the University of California, Irvine (UCI) Machine Learning Repository:
[http://www.ics.uci.edu/~mlearn/MLRepository.html](http://www.ics.uci.edu/~mlearn/MLRepository.html)
| null |
CC BY-SA 2.5
| null |
2011-03-18T19:21:29.017
|
2011-03-18T19:21:29.017
| null | null |
520
| null |
762
|
2
| null |
364
|
8
| null |
This question requires a comprehensive answer, perhaps beyond the confines of my input box :) Suffice it here to state the following:
The First Fundamental Theorem of Asset Pricing states that in an arbitrage-free market, there exists a ("net") present value function, that is, a linear valuation rule whose value is zero when evaluated in any traded cash-flow.
This is an existence theorem, and it does not depend on the theoretical or "real" form of the market. It does not depend on discrete or continuous time modeling, nor on whether there are transaction costs, trading constraints, or missing markets. All we need is the assumption that we can undertake two or more trades simultaneously, that we can scale them up, and that for every given trade we can have its "mirror" in the market - that is, that we have a linear vector space of traded cashflows.
The Second Fundamental Theorem of Asset Pricing states that when an arbitrage-free market is "complete", the linear valuation rule is unique.
It is also true that these two separate theorems, with different implications, are more often than not presented in a fused form. This can be confusing. Proofs of these facts are in virtually every graduate asset pricing book. My favourite is Duffie's 'Dynamic Asset Pricing Theory'.
| null |
CC BY-SA 3.0
| null |
2011-03-18T22:19:15.097
|
2012-03-29T20:31:20.233
|
2012-03-29T20:31:20.233
|
114
|
114
| null |
763
|
2
| null |
575
|
3
| null |
This link is to a book that covers this exact question:
[http://onlinelibrary.wiley.com/doi/10.1002/9783527610006.ch9/summary](http://onlinelibrary.wiley.com/doi/10.1002/9783527610006.ch9/summary)
Summary: the models that map to both stock markets and currency markets are those that have an autoregressive feature (currency markets commonly exhibit this feature, limiting the choice of models that apply to both currencies and stocks => *ARCH).
| null |
CC BY-SA 2.5
| null |
2011-03-18T23:00:00.450
|
2011-03-18T23:00:00.450
| null | null |
214
| null |
764
|
1
|
766
| null |
7
|
3960
|
From what I understand, the Black-Scholes equation in finance is used to price options, which are a contract between a potential buyer and a seller. Can I use this mathematical framework to "buy" a stock? I do not have the choice of using options in the market I am dealing with -- I either buy something or I don't. So I was wondering if B-S can be used to decide whether to buy a stock the next day, taking its last price, volatility and other necessary variables into account.
Thanks,
|
Using Black-Scholes equations to "buy" stocks
|
CC BY-SA 2.5
| null |
2011-03-19T01:06:54.647
|
2013-04-09T10:48:19.317
|
2011-03-19T12:20:45.817
|
69
|
616
|
[
"option-pricing",
"black-scholes",
"differential-equations",
"prediction"
] |
765
|
2
| null |
753
|
5
| null |
I typically have several tests.
Like Ralph Winters said, one calculation to use is the equity curve. I use "...equity curve slope must be greater than x1..." to continue using a model. Another test is, "...%wins must be greater than x2...". Another is, "...average %return per winning transaction must be greater than x3...". Another is "...%return per losing transaction must be less than x4...". Other tests depend on what phase the economy is in, level of various volatilities, etc.
The idea is to make models that can be compared with each other, older models, newer models, and the level of risk that is acceptable.
| null |
CC BY-SA 2.5
| null |
2011-03-19T01:27:48.857
|
2011-03-19T02:42:23.207
|
2011-03-19T02:42:23.207
|
392
|
392
| null |
766
|
2
| null |
764
|
9
| null |
An equity represents ownership of a company and may be thought of as a derivative on the cash flow. For this reason, equities are valued through discounted cash-flow (DCF) analysis.
An option is a right, though not an obligation, to buy or sell an asset at a fixed price at some point in the future. As per Black-Scholes, the value of an at-the-money option is principally determined by the time to expiration and the volatility of the underlying's price.
Equities are very different from options:
- The volatility of the cash flows cannot be modeled by Brownian motion. Instead, volatility is represented by the discounting factor in DCF used to determine the present value.
- Equities don't have an expiration, so their value can't simply decrease to zero over time.
- Equities don't really confer a right to the cash flow; there is a whole body of corporate governance around that one.
In short, the features of equities and options are so vastly different that their valuation techniques must be distinct.
| null |
CC BY-SA 2.5
| null |
2011-03-19T02:46:31.160
|
2011-03-19T02:46:31.160
| null | null |
35
| null |
767
|
2
| null |
764
|
2
| null |
Yes, this certainly is a valid point of view if you are thinking about purchasing the stock outright rather than with options. In theory, if the implied volatility of calls was significantly higher than the implied volatility of puts, buying the stock would be a better bet than shorting it. This is particularly true for the at-the-money calls and puts. You could also use the expiration month as a guide to the time frame. Longer time frames indicate more trust in fundamentals.
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-03-19T15:10:22.313
|
2011-03-19T15:10:22.313
| null | null |
520
| null |
768
|
2
| null |
764
|
9
| null |
You can look at equity as a call option on the firm. In theory this illustrates the differences between holding equity or debt.
The quick and dirty version is that equity holders own the firm, but only after the debt holders are repaid. If you have a simple levered firm with one outstanding debt issue, it is as though the equity holders have a call option on the firm with strike equal to the face value of debt and expiration equal to the debt maturity date.
You can use the technique to price equity, but it still requires you to value the underlying firm and do the calculation for all outstanding debt. A lot of research tests these pricing models, but there are so many assumptions wrapped up in them that I would have a really hard time using the "call option" price to say that a firm's equity is under- or over-priced.
Regardless, it's an interesting way of looking at the relation between equity and debt. If you're interested in learning more, [Damodaran](http://pages.stern.nyu.edu/~adamodar/) has a [good reference](http://pages.stern.nyu.edu/~adamodar/New_Home_Page/lectures/opt.html) on his website.
| null |
CC BY-SA 2.5
| null |
2011-03-19T15:54:29.263
|
2011-03-19T15:54:29.263
| null | null |
106
| null |
769
|
2
| null |
733
|
2
| null |
Proxy integration is just an approximation which can deviate pretty strongly from the recursion algorithm for thin tranches or extremely large/small loss variances. Proceed with caution.
Also: when you integrate the Gaussian proxy, do you start from $-\infty$ or from 0? Counter-intuitively, you should start from $-\infty$ to get the correct portfolio expected loss.
| null |
CC BY-SA 2.5
| null |
2011-03-19T17:31:39.523
|
2011-03-19T17:31:39.523
| null | null |
89
| null |
770
|
2
| null |
753
|
10
| null |
There are really a few issues here:
1) When do I turn off a model because I believe the model is invalid?
This is a subjective call and depends on many things such as how strong the economic reasoning behind the model is, how crowded the space is, and how poorly the model is performing relative to backtest. With regard to the last point, as a rule of thumb I expect to live through a drawdown twice as bad as the worst drawdown in my backtest, since the backtest is overfit. If you have a Sharpe 5 model and realize a Sharpe 0, it is an easy call to shut down. But with trend following, you are probably getting a Sharpe around 1, so you can easily have poor performance for a few years without the model being statistically invalid.
2) How do I time turning off a model using past performance or some other variable, hoping to turn it back on later.
I have not had much luck with model timing. You are layering a tenuous prediction on top of a tenuous prediction. The only timing models that I know of are conditioning some high freq strats on volatility because volatility tends to be autocorrelated and the economics of high vola regimes rewards liquidity providers.
3) How do I size my portfolio to preserve capital?
You can either size down or stop trading when you start losing too much money. There are many algorithms for this. This is like buying a put option, so expect to pay for it. Or you can size your strat to be able to handle your expected worst performance (such as 2x mdd) comfortably and keep a constant size.
| null |
CC BY-SA 2.5
| null |
2011-03-19T17:34:50.690
|
2011-03-19T18:06:25.000
|
2011-03-19T18:06:25.000
|
443
|
443
| null |
771
|
2
| null |
690
|
4
| null |
Piterbarg (who's a very smart and no-nonsense guy) looks at the problem from the practitioner's perspective, i.e. "will this make the traders happy?". Academicians tend to answer the question "will this make the journal referees happy?". Two different goals.
| null |
CC BY-SA 2.5
| null |
2011-03-19T17:36:57.320
|
2011-03-19T17:36:57.320
| null | null |
89
| null |
772
|
2
| null |
760
|
6
| null |
Maybe a better question title is "Can futures market open interest predict commodity, treasury, and equity returns"?
I saw this paper in an earlier form and it still baffles me. Superficially, it makes sense that price*quantity holds more information than just price when quantity can change quickly (i.e., outstanding futures contracts change more quickly than outstanding equity shares). Open interest is the dollar value of outstanding futures contracts, so there are two sides to these contracts, and of course each side thinks that they're getting the better deal. So it's not really clear that this should be a bull or bear indicator. If the price component dominates, then this could just be a CPI precursor, which would drive the returns in treasuries and equities.
Also, the open interest term is a rolling average, which makes it highly autocorrelated. On this they regress treasury returns, which are also autocorrelated, so there could be some spurious regression in there. But Hong and Yogo are top of the field, so there's most likely something I'm overlooking and I'm interested in your thoughts.
After I saw this paper, I tried the same thing in equities. I got data from the OneChicago single stock futures market and tried to use open interest and open interest growth to predict equity returns. I couldn't find anything more powerful than short term reversal. The OneChicago single stock futures market has only been around since the mid-2000s and still has relatively thin trading. Maybe in a few more years there will be enough volume to get better tests?
So, I don't think open interest predicts returns in a profitable way. It can predict returns, but not any better than short-term reversal or other documented mechanisms.
| null |
CC BY-SA 2.5
| null |
2011-03-20T03:33:47.020
|
2011-03-20T03:33:47.020
| null | null |
106
| null |
773
|
2
| null |
760
|
8
| null |
[CXO-Advisory](http://www.cxoadvisory.com/) investigated the claim of this paper and concluded that the evidence indicates changes in open interest in futures markets are strong predictors of returns for the associated asset classes, even after controlling for a number of conventional predictors.
They state that investors may be able to exploit this predictive power via tactical asset class allocation, but nevertheless warn that the study does not explicitly translate the evidence into a realistic trading strategy.
Source: [http://www.cxoadvisory.com/commodity-futures/futures-market-open-interest-as-return-predictor/](http://www.cxoadvisory.com/commodity-futures/futures-market-open-interest-as-return-predictor/)
| null |
CC BY-SA 2.5
| null |
2011-03-20T09:29:24.120
|
2011-03-20T09:29:24.120
| null | null |
12
| null |
774
|
1
|
776
| null |
15
|
1274
|
I want to start a blog/newsletter and maintain a track record of
trades I recommend. I have a never-expiring demo account for this
purpose.
How do I keep this track record "honest"? Three months from now,
someone can claim I backdated everything, made up trades, etc. Is
there a well-known way to keep traders honest?
I'll be trading OTC FX spot options, so I have to use Saxo Bank, and
can't use a generic site. Will this affect the answer?
I plan to take screenshots of my account daily, but I'm not sure
that's enough.
|
Keeping a track record honest
|
CC BY-SA 2.5
| null |
2011-03-20T14:58:29.037
|
2019-02-16T21:45:22.700
| null | null | null |
[
"trading-systems"
] |
775
|
2
| null |
774
|
6
| null |
One thing I can think of: on your blog/newsletter, offer to send daily email updates of your results to readers. That way no one can say that you backdated or otherwise altered results. And I'm sure everyone will appreciate your keeping things honest.
| null |
CC BY-SA 2.5
| null |
2011-03-20T15:56:47.787
|
2011-03-20T15:56:47.787
| null | null |
520
| null |
776
|
2
| null |
774
|
9
| null |
If what you're worried about is being accused of backdating, then you could try having your articles timestamped by some trusted third party. This way you can certify that a document/article was created before a certain date and wasn't modified afterwards (so, in this situation, that you came up with some conclusion/investment decision in the real past). In a non-professional context, some free services might give you enough credibility:
- timeMarker
- Stamper service
But backdating is only one of the problems you may encounter when trying to reliably report your performance. For example, timestamping won't help the fact that you can selectively publish only the trades that went well or just erase your prior poor record if it wasn't already disseminated. You could try to minimise this risk by keeping the window between actual transactions and publishing recommendations small (no "past trades" popping up in your blog out of nowhere), but in the end I suppose it's a question of trust - do your readers (clients?) feel they should/can trust you and foremost what's at stake (if you're only blogging personally it's different than if you're managing a mutual fund).
Reliable reporting is a much more complex topic, there are numerous examples of real problems occurring in professional context, thus guidelines like [GIPS](http://www.gipsstandards.org/standards/index.html) were created. But still, I feel there's no ultimate solution to the problem. Most of the time it's a mix of legal regulations, business reasoning (you can't really "cheat" your track record that much if you have to later pay it out), professional/ethical standards and some good faith. But sorry, that was probably too much of a digression.
Going back to your question, I think the most important factor is what your goal is. Are you blogging and keeping your transaction journal just for "fun" and educational purposes? Do you sell services/systems? Or maybe this blog is connected in some manner to a financial services company (managing client investments)? Presumably it's the former, but otherwise this would dramatically change the situation, and you would need to think about a much more complete and stringent solution to the problem.
| null |
CC BY-SA 2.5
| null |
2011-03-20T16:16:06.490
|
2011-03-20T16:24:55.387
|
2011-03-20T16:24:55.387
|
38
|
38
| null |
777
|
2
| null |
774
|
9
| null |
Occam's Razor:
Set up a Facebook account or a Twitter account and post your trade recommendations there. They are time/date stamped, easily accessible to others, and cannot be backdated.
| null |
CC BY-SA 2.5
| null |
2011-03-20T17:33:06.583
|
2011-03-20T17:33:06.583
| null | null |
214
| null |
778
|
2
| null |
774
|
7
| null |
For third party tracking of trades, try these guys:
[https://www.timertrac.com/Public/Default.asp](https://www.timertrac.com/Public/Default.asp)
| null |
CC BY-SA 2.5
| null |
2011-03-20T18:04:59.640
|
2011-03-20T18:04:59.640
| null | null |
392
| null |
779
|
2
| null |
760
|
3
| null |
My main concern when I read this paper was that they appear not to be lagging the CFTC data appropriately. The open interest data is reported with almost a week's delay. They are using 1-year smoothing of 1-month changes, so that will mitigate some of the look-ahead bias, but I think the paper is untradeable.
| null |
CC BY-SA 2.5
| null |
2011-03-20T21:13:43.003
|
2011-03-21T00:32:10.487
|
2011-03-21T00:32:10.487
|
443
|
443
| null |
780
|
2
| null |
774
|
7
| null |
My primary recommendation is: ["eat your own dogfood"](http://en.wikipedia.org/wiki/Eating_your_own_dog_food). You can then share an actual track record. And have your trades audited by a third party.
One good example of this in the quant blogosphere is [MarketSci](http://www.marketsci.com/index.html) (Michael Stokes), and I don't think that you could go wrong by [following his example](https://marketsci.wordpress.com/about/). He uses TimerTrac (mentioned by @bill_080) for some of the auditing ([an example](https://www.timertrac.com/Private/Research/GraphSignals.asp)). He also offers managed accounts.
| null |
CC BY-SA 2.5
| null |
2011-03-20T21:18:22.967
|
2011-03-20T21:18:22.967
| null | null |
17
| null |
781
|
2
| null |
774
|
-1
| null |
Easy. Just publish an MD5 hash of the complete text of your trade recommendation, along with its timestamp. Then any reader can verify the exact date and the description of your recommendation.
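A minimal Java sketch of the idea (the recommendation text is a made-up example): publish the hash now, reveal the full text later, and readers can recompute and compare.
```
import java.security.MessageDigest;

public class HashStamp {
    public static void main(String[] args) throws Exception {
        // Hypothetical recommendation; the timestamp is part of the hashed text.
        String rec = "2011-03-21T15:46Z BUY EURUSD 1M 25-delta call";
        byte[] digest = MessageDigest.getInstance("MD5").digest(rec.getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        System.out.println(hex); // publish this now; reveal `rec` later for verification
    }
}
```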
| null |
CC BY-SA 2.5
| null |
2011-03-21T15:46:24.737
|
2011-03-21T15:46:24.737
| null | null |
114
| null |
782
|
2
| null |
760
|
2
| null |
I just skimmed the paper, but it looks like Tables 6 through 10 hold the "predictors". Notice the R^2 values. I assume that their R^2=3% is my R^2=0.03. If that's true, I would guess that trading frictions would dominate any of their predictions.
| null |
CC BY-SA 2.5
| null |
2011-03-21T17:05:29.053
|
2011-03-21T17:05:29.053
| null | null |
392
| null |
783
|
2
| null |
774
|
4
| null |
Check out what these guys have done (hashes published to arxiv on fixed dates): [http://arxiv.org/ftp/arxiv/papers/0911/0911.0454.pdf](http://arxiv.org/ftp/arxiv/papers/0911/0911.0454.pdf)
| null |
CC BY-SA 2.5
| null |
2011-03-22T02:37:36.370
|
2011-03-22T02:37:36.370
| null | null |
225
| null |
784
|
1
| null | null |
1
|
3506
|
I am putting together an ultra-high-frequency desk and need to answer the following questions for ordering some rack servers to process about 2 GB of data per second. If anyone has worked at an HFT desk before, can you help me out? What kind of software do big HFT shops run?
- How many processors are required, and what clock speed?
- How much memory required?
- How much internal storage would you need?
- Do you require any external storage?
- How many network ports are required?
- How many fibre connections are required?
|
Ultra-High Frequency Trading Help
|
CC BY-SA 4.0
| null |
2011-03-22T03:48:54.870
|
2018-12-29T18:50:30.100
|
2018-11-26T20:35:35.563
|
11723
| null |
[
"high-frequency",
"software",
"hardware"
] |
785
|
2
| null |
784
|
1
| null |
Q: How many processors are required, and what clock speed?
A: As many as you can get.
Q: How much memory is required?
A: As much as you can get.
Q: How much internal storage would you need?
A: As much and as fast as you can get; think 500GB SSD.
Q: Do you require any external storage?
A: Yes; think in terabytes.
Q: How many network ports are required?
A: At least 2.
Q: How many fibre connections are required?
A: As many as you can get.
Q: Which level of AIX software is required?
A: The fastest version, with the fewest nonessential modules.
| null |
CC BY-SA 2.5
| null |
2011-03-22T05:56:33.243
|
2011-03-22T05:56:33.243
| null | null |
214
| null |
786
|
2
| null |
784
|
7
| null |
Your question is very broad and cannot be answered by the community, as it requires a thorough analysis of your business needs. This isn't something that can be done over a coffee break. Of course, one may answer your questions with "as many & as much & at least & the fastest", but your costs will skyrocket, as there is virtually no limit... Do you really want to spend money on an F1 car when all your needs can be satisfied by a decent van?
Long story short: you'll do better by hiring somebody knowledgeable to do the job.
| null |
CC BY-SA 2.5
| null |
2011-03-22T10:38:03.307
|
2011-03-22T10:38:03.307
| null | null |
493
| null |
787
|
2
| null |
764
|
2
| null |
The B-S formula itself is not as interesting for you as its derivation. The formula is based on the B-S model.
(i) Suppose that the model describes real life well - remember that price movements are assumed to be random and unpredictable, so it does not contradict the EMH.
If the model is true, the derivation of the B-S equation tells you that if you short an option, you can perfectly (non-randomly!) hedge yourself with a given strategy (delta-hedging). What does that mean? It means that if on the real market the price of the option does not coincide with its theoretical value, then using delta-hedging you can end up at maturity with more money in your bank account than you need to settle the exercise of the option.
Roughly speaking, if the real price of the option does not coincide with the one the B-S formula gives you, you can make a profit.
Problems? The model is not perfect, i.e. using delta-hedging you are still exposed to risk - e.g. B-S delta-hedging assumes no transaction costs, which is artificial.
| null |
CC BY-SA 2.5
| null |
2011-03-22T11:03:19.330
|
2011-03-22T11:03:19.330
| null | null |
464
| null |
788
|
1
|
1097
| null |
5
|
405
|
It seems like the vast majority of the hedging literature is dedicated to the speculative side. I am searching for quality papers that deal with the link between financial and physical markets and how non-financial companies can manage and control the risks of price movements in commodities and currencies in their core operations.
Does anyone have any advice about what titles to look for?
Thanks!
|
Commodity hedging in non-financial companies - any literature available?
|
CC BY-SA 2.5
| null |
2011-03-22T13:45:22.267
|
2011-05-01T21:01:53.607
| null | null |
636
|
[
"hedging",
"commodities"
] |
790
|
2
| null |
565
|
4
| null |
There is no standard approach to modelling quanto CDS. In practice, people look at the dynamic hedging costs over time as well as the expected loss from an FX gap in the event of a default of the reference entity. The former is modelled by some correlated Brownian motion (for FX) and a mean-reverting process (for credit - could be Ornstein-Uhlenbeck, for example). In addition, you need some event correlation for the FX gapping when the reference entity defaults. You see, it's all a little messy. Don't get me started on the calibration...
CDS on sovereigns in the country's own currency trade at roughly 50%-60% of their liquid spot contract, while EUR countries trade between 5% (peripheral) and 30% (core).
| null |
CC BY-SA 2.5
| null |
2011-03-22T15:16:10.580
|
2011-03-22T15:16:10.580
| null | null |
637
| null |
791
|
2
| null |
565
|
2
| null |
The consensus seems to be to use an (affine) jump-diffusion process, and then copulas and/or correlated Brownian motions to handle the correlation structure.
Here's a link to a recent paper that discusses these models in great detail and includes their application to modeling quanto CDS:
[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1153400](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1153400)
| null |
CC BY-SA 2.5
| null |
2011-03-22T15:50:56.637
|
2011-03-22T15:50:56.637
| null | null |
214
| null |
792
|
1
|
793
| null |
10
|
4485
|
I am studying Arbitrage Pricing Theory (APT) and I have a question about calculating factor exposures.
Assume:
\begin{equation}
r = \beta_1r_1 + \beta_2r_2 + ... + \beta_kr_k + r_e
\end{equation}
Where:
$\beta_i$ is the exposure of the asset to factor $i$
$r_i$ is the return attributable to factor $i$
I believe that beta will be the covariance of the factor with the underlying asset. Is this correct? Also how is the return attributable to a specific factor calculated? Is there a single way this is done or are there a variety of approaches?
|
How to perform risk factor calculation?
|
CC BY-SA 3.0
| null |
2011-03-22T15:59:29.637
|
2012-06-10T17:45:55.513
|
2012-06-10T17:45:55.513
|
467
|
373
|
[
"regression",
"factor-models"
] |
793
|
2
| null |
792
|
6
| null |
>
I believe that beta will be the covariance of the factor with the underlying asset. Is this correct?
Close, it's the covariance divided by the variance of the factor.
\begin{equation}
\beta_{f,a} = \frac{\sigma_{f,a}}{\sigma^2_f}
\end{equation}
>
Also how is the return attributable to a specific factor calculated? Is there a single way this is done or are there a variety of approaches?
That depends on how you derive your factors. As mentioned in this [earlier question](https://quant.stackexchange.com/q/738/35), I once derived factors with cluster analysis. Thus, each factor was really a collection of highly correlated large-cap stocks. That meant the factor return was simply the cap-weighted average of all constituent stock returns, just like in a stock index.
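For the first part, a minimal Java sketch of that beta estimate from paired return samples (an illustration only, with hypothetical inputs):
```
// beta = cov(factor, asset) / var(factor), from paired return observations.
public static double beta(double[] factor, double[] asset) {
    int n = factor.length;
    double mf = 0, ma = 0;
    for (int i = 0; i < n; i++) { mf += factor[i]; ma += asset[i]; }
    mf /= n; ma /= n;
    double cov = 0, var = 0;
    for (int i = 0; i < n; i++) {
        cov += (factor[i] - mf) * (asset[i] - ma);
        var += (factor[i] - mf) * (factor[i] - mf);
    }
    return cov / var; // the 1/(n-1) factors cancel in the ratio
}
```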
| null |
CC BY-SA 3.0
| null |
2011-03-22T16:18:08.473
|
2012-05-29T09:32:56.927
|
2017-04-13T12:46:23.000
|
-1
|
35
| null |
794
|
2
| null |
792
|
-1
| null |
You can use PCA as well, not just cluster analysis.
| null |
CC BY-SA 2.5
| null |
2011-03-22T17:07:25.557
|
2011-03-22T17:07:25.557
| null | null |
464
| null |
796
|
1
| null | null |
7
|
1613
|
I am valuing a binary FX option (European) with a defined strike and term (2Y). I'm using a closed-form solution based on the Black-Scholes framework. How can I derive the appropriate volatility to use from the market data I have?
Market Data (all quoted in implied volatility):
```
ATM
25D Risk Reversal
25D Butterfly
10D Risk Reversal
```
|
How to derive appropriate volatility for a binary option (with strike/term) from market data?
|
CC BY-SA 2.5
| null |
2011-03-22T19:54:13.400
|
2011-03-26T14:01:38.657
| null | null |
223
|
[
"volatility",
"options",
"fx"
] |
797
|
2
| null |
796
|
3
| null |
Binary options can be replicated (in theory) by trading long and short call options with very close strikes. Take the Black-Scholes formula and differentiate it with respect to the strike. You will need to know the slope of the implied volatility skew around the strike of the binary option. You can get this by fitting a parametric formula (I don't know exactly what is used in FX - also SABR?) to your market data. If your option's strike is not too far away from ATM, you should get a reasonable number.
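A sketch of the math behind this (standard notation; $N(\cdot)$ is the normal CDF and $d_2$ as in Black-Scholes): writing the call price as $C(K, \sigma(K))$ and differentiating with respect to the strike gives the binary (digital) call price
\begin{equation}
D(K) = -\frac{dC(K,\sigma(K))}{dK} = e^{-rT}N(d_2) - \frac{\partial C}{\partial \sigma}\,\frac{\partial \sigma}{\partial K}
\end{equation}
i.e. the flat-smile digital price plus a vega-times-skew-slope correction, which is why the slope of the skew around the strike matters.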
Disclaimer: I don't specialise in FX.
| null |
CC BY-SA 2.5
| null |
2011-03-22T20:15:42.097
|
2011-03-22T20:15:42.097
| null | null |
89
| null |
798
|
1
|
820
| null |
13
|
429
|
Let's say I am hedging an exotic instrument $E$ with $N$ liquid instruments $L_i$, each of which has an associated hedging ratio $R_i$ and a bid-ask spread $\delta_i$ (per dollar of notional). What would you recommend as a cost function to balance the completeness of the hedge and minimize the hedging cost?
|
Cost function for hedging portfolio
|
CC BY-SA 2.5
| null |
2011-03-22T20:25:32.217
|
2011-03-28T19:24:26.057
| null | null |
89
|
[
"hedging"
] |
799
|
1
| null | null |
8
|
1281
|
I'm looking for a heuristic way to calculate the probabilities of being in the money at expiry for non-defined risk options combinations (listed options).
I use delta as a proxy for this probability of success for single options, which makes an implicit distributional assumption.
For spreads I use width of the spread (or the worst drawdown/largest possible gain for more complex defined risk combinations) and $ received/paid for it. I treat the options combos as if they were bets and I get the implied probabilities from the prices of those bets.
What is a good heuristic for estimating such probabilities for straddles and strangles (and other non-defined risk combinations)?
EDIT: To clarify the above: a straddle/strangle is a bet. What's the probability of this bet being profitable at expiration? How do I imply the probability of success of this bet?
|
Heuristics for calculating theoretical probabilities of being ITM at time T for listed options
|
CC BY-SA 2.5
| null |
2011-03-23T03:16:36.510
|
2011-03-26T13:52:02.077
|
2011-03-24T13:59:45.487
|
358
|
358
|
[
"options",
"probability"
] |
800
|
2
| null |
792
|
3
| null |
If you have a series of observations of the return as a vector, $\mathbf{r}$ with corresponding observations of the factor returns in matrix $Z$, then the least squares estimate of the vector of betas is
$$\hat{\beta} = \left(X'X\right)^{-1} X'\mathbf{r},$$
where $X$ is the matrix $Z$ augmented with a column of all ones (for the intercept term). The coefficient on the column of ones will be the estimate of the 'idiosyncratic' return. In general, the estimate of the $j$th coefficient, $\hat{\beta_j}$, will not be the correlation of the return with the return of the $j$th factor, nor will it be that value adjusted for the volatility of the factor.
If you have only one factor (in which case it is CAPM, not APT), then the computation does simplify. Also, if the sample returns of the different factors are independent vectors (highly unlikely to happen by accident), you will get the simplification.
See [wikipedia](http://en.wikipedia.org/wiki/Ordinary_least_squares#Estimation) for more on multiple linear regression.
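As a toy illustration of this estimator (not production numerics - no pivoting, so it assumes $X'X$ is well-conditioned), a Java sketch solving the normal equations:
```
// beta-hat = (X'X)^{-1} X'r, via Gaussian elimination on the normal equations.
// x: one row per observation, one column per regressor (include a column of ones).
public static double[] ols(double[][] x, double[] r) {
    int n = x.length, k = x[0].length;
    double[][] a = new double[k][k + 1]; // augmented system [X'X | X'r]
    for (int i = 0; i < k; i++) {
        for (int j = 0; j < k; j++)
            for (int t = 0; t < n; t++) a[i][j] += x[t][i] * x[t][j];
        for (int t = 0; t < n; t++) a[i][k] += x[t][i] * r[t];
    }
    for (int p = 0; p < k; p++)          // forward elimination
        for (int i = p + 1; i < k; i++) {
            double f = a[i][p] / a[p][p];
            for (int j = p; j <= k; j++) a[i][j] -= f * a[p][j];
        }
    double[] beta = new double[k];
    for (int i = k - 1; i >= 0; i--) {   // back substitution
        beta[i] = a[i][k];
        for (int j = i + 1; j < k; j++) beta[i] -= a[i][j] * beta[j];
        beta[i] /= a[i][i];
    }
    return beta;
}
```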
| null |
CC BY-SA 2.5
| null |
2011-03-23T04:33:38.810
|
2011-03-23T04:33:38.810
| null | null |
108
| null |
801
|
2
| null |
792
|
10
| null |
I don't have much to add, but wanted to address the "price of risk" question.
APT is kind of "economics"-free and tries to price assets without the utility maximization required in CAPM/ICAPM. Ross's APT observes that groups of assets move together (e.g., tech stocks) and that is the risk you're bearing because the idiosyncratic risk, like the firing of HP's CEO, can be diversified away. Because this risk is easily diversifiable, the market won't pay you to take it. So in your APT model these factors are returns to asset classes, industries, etc.
Although the model looks the same, in Merton's ICAPM, the factors are state variables (e.g., industrial production, inflation). These are purely academic points -- in practice you run a multivariate regression with return on the LHS and whatever factors you think are priced on the RHS. OLS and GMM are common. So you'll estimate $$E ~ \left[ ~r_i~ \right] = \alpha_i + \beta_i^1 f_1 + \beta_i^2 f_2 + \ldots + \beta_i^k f_k$$
Your final question.
>
Also how is the return attributable to
a specific factor calculated?
Now you regress the returns back on the betas. $$E ~ \left[ ~r_i~ \right] = \sum_{j \in K} \lambda_j \beta_i^j$$
Where $\lambda_j$ is the return to factor $j$. Typically the Fama-MacBeth approach is used here. If you've done it correct and found something, $\lambda > 0$ (i.e., the market is paying you to take this risk).
| null |
CC BY-SA 3.0
| null |
2011-03-23T20:49:49.977
|
2012-05-30T13:32:58.037
|
2012-05-30T13:32:58.037
|
106
|
106
| null |
802
|
1
| null | null |
21
|
13566
|
Do you know of any papers which consider pairs trading (or statistical arbitrage) on foreign exchange?
I couldn't find any. I asked this question on several forums and got no reply. Thus, I guess this trading strategy is inapplicable due to the properties of currency markets or other fundamental reasons. However, it is not obvious to me what these reasons are.
|
Is statistical arbitrage on FX possible?
|
CC BY-SA 4.0
| null |
2011-03-23T22:14:43.023
|
2020-07-17T22:04:48.380
|
2020-03-31T12:10:42.113
|
20795
|
370
|
[
"fx",
"arbitrage",
"reference-request",
"research",
"pairs-trading"
] |
803
|
2
| null |
14
|
4
| null |
Deltas represent a hedge ratio, i.e. 5%, 10%, 25%... For example, buy two 50-delta puts and buy 100 shares of stock for a perfect hedge at that price, done. The delta volatility "smile" should be represented with the smallest delta having the highest volatility down to the largest delta having the smallest volatility, that being the at-the-money option, struck at the price of the stock. A 50-delta put on \$100 IBM is the 100 strike: buy two 50-delta puts, buy 100 shares of IBM at \$100. Higher-volatility options have less chance of ending up in the money at expiration; an easy way to think of this is that a 5-delta option has a 5% chance of ending up in the money at expiration. Instead of hedging with the underlying contract, lower deltas can be offset by selling other options against them... buy two 5-deltas and sell one 10-delta against them.
| null |
CC BY-SA 3.0
| null |
2011-03-23T22:52:51.767
|
2016-08-10T07:54:14.220
|
2016-08-10T07:54:14.220
|
18977
| null | null |
804
|
2
| null |
764
|
2
| null |
As the BSM gives a call price as a function of stock price, volatility and other inputs, c(0) = BSM[Stock, Strike, volatility, risk-free rate, term], it seems to me you could use it in a way analogous to implied volatility (i.e., implied vol is the volatility input that produces a model output equal to the traded market price).
The analogous use, I think, would be to input/assume a volatility, then, given an observed (traded) option price, simply iterate to solve for the stock price that calibrates the BSM to the traded option price; i.e., conditional on a given volatility assumption, this would be the fair stock/asset price implied by the BSM.
... of course, in addition to the inherent model risk, you have to assume a volatility [that is NOT the implied volatility], so I think it's Merton-esque in that you are hinging it on the hard-to-estimate volatility.
| null |
CC BY-SA 2.5
| null |
2011-03-24T00:32:52.727
|
2011-03-24T04:53:49.483
|
2011-03-24T04:53:49.483
|
35
|
640
| null |
805
|
2
| null |
802
|
4
| null |
Disclaimer: I know nothing about FX trading, other than that I've heard something to the effect of "The first rule of FX trading is that you do not trade FX. The second rule..." you know how it goes.
I'm not into macroeconomics, but I get the impression that the benchmark for FX models is a random walk. That is to say that the fundamentals have nothing to say about FX at anything on a short horizon, which I think is considered four years. I think what has complicated a lot of the research here is limited data in floating exchange rate regimes, small policy interventions, and rare huge policy interventions.
I think Stock and Watson have the best, recent exchange rate models. These papers won't discuss trading, but could be thought-provoking in how you look at the problem
[JASA 2002](http://econpapers.repec.org/article/besjnlasa/v_3a97_3ay_3a2002_3am_3adecember_3ap_3a1167-1179.htm), Journal of Business & Economic Statistics 2002 (sorry, couldn't find link).
HTH (someone with practical knowledge will have to chime in with how to implement :) )
| null |
CC BY-SA 2.5
| null |
2011-03-24T02:12:15.973
|
2011-03-24T02:12:15.973
| null | null |
106
| null |
806
|
2
| null |
728
|
7
| null |
Getting something up and running quickly -- i.e. data manipulation and exploration -- is an activity R is adept at, and there is a plethora of packages to help you. Flexibility and speed (of research) are R's primary strengths. I feel memory and computing power are less expensive than the thought cycles used to explore an idea.
If you're entering a production level arms race, obviously R is not the answer. However, I find R acceptable for production -- enough to plug it into an institutional order management system. As long as your investment strategy is based on predictive market analytics, I don't see a drastic need for speed.
| null |
CC BY-SA 2.5
| null |
2011-03-24T03:35:46.833
|
2011-03-24T03:35:46.833
| null | null |
641
| null |
807
|
2
| null |
799
|
3
| null |
For a straddle, the probability of both legs being in the money is zero :-)
The probability of one of the legs being in the money is essentially 1.
For a strangle, the probability of one of the legs being in the money at expiration is the sum of the absolute values of the deltas of the two legs of the strangle.
(Think about one side of the strangle close to the money and the other side far out of the money... the total probability has to be greater than the probability of the near leg alone.)
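A quick worked example with hypothetical deltas: for a strangle with a 30-delta put and a 25-delta call,
\begin{equation}
P(\text{one leg ITM}) \approx \left|\Delta_{\text{put}}\right| + \Delta_{\text{call}} = 0.30 + 0.25 = 0.55
\end{equation}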
| null |
CC BY-SA 2.5
| null |
2011-03-24T03:37:24.023
|
2011-03-24T03:37:24.023
| null | null |
214
| null |
808
|
2
| null |
753
|
1
| null |
Use Maximum Drawdown at Risk with an exceedance probability test
| null |
CC BY-SA 2.5
| null |
2011-03-24T03:46:22.633
|
2011-03-24T03:46:22.633
| null | null |
641
| null |
809
|
2
| null |
802
|
1
| null |
Every FX trade is fundamentally a pairs trade.
e.g. EUR/USD is a pairs trade on euros vs dollars.
Given this fundamental 'pairing', talking about pairs trading on forex pairs becomes, well, redundant.
| null |
CC BY-SA 2.5
| null |
2011-03-24T04:31:24.060
|
2011-03-24T04:31:24.060
| null | null |
214
| null |
810
|
2
| null |
802
|
1
| null |
You may be interested in trading using correlations between different quotes - then it is like optimal selection theory for a usual portfolio. The only difference is in the model for the FX quotes (whereas in portfolio optimization, stock models are used) - I am also looking for this model and cannot advise you of anything at the moment.
| null |
CC BY-SA 2.5
| null |
2011-03-24T09:22:19.107
|
2011-03-24T09:22:19.107
| null | null |
464
| null |
811
|
1
|
812
| null |
2
|
1694
|
What are the applications of binomial trees?
|
What are binomial trees and how are they used?
|
CC BY-SA 2.5
| null |
2011-03-24T11:02:49.320
|
2011-03-24T12:38:20.303
| null | null | null |
[
"binomial-tree"
] |
812
|
2
| null |
811
|
1
| null |
From wiki's entry
>
In finance, the binomial options
pricing model (BOPM) provides a
generalizable numerical method for the
valuation of options. The binomial
model was first proposed by Cox, Ross
and Rubinstein (1979). Essentially,
the model uses a "discrete-time"
(lattice based) model of the varying
price over time of the underlying
financial instrument. In general,
binomial options pricing models do not
have closed-form solutions.
See the full post, [http://en.wikipedia.org/wiki/Binomial_options_pricing_model](http://en.wikipedia.org/wiki/Binomial_options_pricing_model), for methodology/implementation guidelines.
| null |
CC BY-SA 2.5
| null |
2011-03-24T11:17:18.510
|
2011-03-24T11:17:18.510
| null | null |
471
| null |
813
|
2
| null |
811
|
1
| null |
You can also read about the Cox-Ross-Rubinstein model (see e.g. Shreve, Stochastic Calculus for Finance I). Binomial trees are discrete-time models assuming that at each step there are only two possibilities for the change of the price.
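As a minimal illustration, a CRR tree pricer for a European call in Java (all parameters hypothetical):
```
// Cox-Ross-Rubinstein binomial tree for a European call.
public static double crrCall(double s0, double k, double r, double sigma, double t, int n) {
    double dt = t / n;
    double u = Math.exp(sigma * Math.sqrt(dt));  // up factor
    double d = 1.0 / u;                          // down factor
    double p = (Math.exp(r * dt) - d) / (u - d); // risk-neutral up probability
    double[] v = new double[n + 1];
    for (int i = 0; i <= n; i++)                 // payoffs at maturity (i up-moves)
        v[i] = Math.max(s0 * Math.pow(u, i) * Math.pow(d, n - i) - k, 0.0);
    for (int step = n - 1; step >= 0; step--)    // roll back through the tree
        for (int i = 0; i <= step; i++)
            v[i] = Math.exp(-r * dt) * (p * v[i + 1] + (1 - p) * v[i]);
    return v[0];
}
```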
| null |
CC BY-SA 2.5
| null |
2011-03-24T12:38:20.303
|
2011-03-24T12:38:20.303
| null | null |
464
| null |
814
|
2
| null |
788
|
2
| null |
Hedging is hedging; it's more related to the market you're trading in than to what your goals are or whether you are a bank or an industrial corporation. Granted, some institutions may be able to trade things others cannot, but in principle the same models will apply if your position in the market is the same (i.e. most financial models used for hedging assume that the trades you make do not move the market price significantly).
| null |
CC BY-SA 2.5
| null |
2011-03-24T20:58:34.920
|
2011-03-24T20:58:34.920
| null | null |
89
| null |
815
|
1
| null | null |
7
|
1160
|
I'm currently working on my Master's project, which is related to accelerating Greeks computations for CVA on mixed interest rate portfolios. I would like to know about the status of technology for CVA and its Greeks computations in the industry (mainly related to speed of computation).
Example situation:
- Portfolio of 100 000 instruments
- Mixture of IR Swaps, Swaptions on multiple currencies
- Consider case with credit/IR correlations AND without them
Question: How long (approximately, or simply mention the order of magnitude) would it take on your system (or a system you know) to compute the total CVA (including all the netting agreements, collateralization, ...) and its sensitivities to every yield curve and vol surface used?
If it is not too confidential, mention the underlying technology (CPU cluster, GPUs) and maybe also the methods used (like Longstaff-Schwartz); you can skip the name of the institution.
Why I need this? I do have a few numbers from local smaller banks, but I'd like to get a broader picture for the need of accelerated methods for these computations.
(Basel III is coming soon, so this will be mandatory for every single serious bank.)
I hope it is clear what I'm seeking.
|
Credit Valuation Adjustments -- computation issues
|
CC BY-SA 2.5
| null |
2011-03-25T12:29:04.777
|
2017-05-25T00:28:33.650
|
2011-03-25T14:15:00.813
|
35
|
647
|
[
"credit",
"valuation",
"gpgpu",
"cva",
"adjustments"
] |
816
|
1
| null | null |
2
|
1330
|
I am implementing a method in Java to calculate the variance, covariance, and value at risk for a portfolio, which should be flexible for use with any number of assets in a portfolio. I am struggling with how to calculate the covariance of the assets as I can only find formulae to do so for two or three sets of values.
Java has a built-in [library](http://commons.apache.org/math/apidocs/org/apache/commons/math/stat/correlation/Covariance.html) to calculate the covariance of two assets and also to calculate the covariance matrix. However, I am not sure how to find the covariance for a portfolio that can contain any number of assets.
|
Covariance for arbitrarily large portfolios
|
CC BY-SA 2.5
| null |
2011-03-25T13:11:03.590
|
2011-03-25T15:36:21.583
|
2011-03-25T14:24:50.383
|
35
|
603
|
[
"risk",
"value-at-risk",
"variance",
"programming",
"covariance"
] |
817
|
2
| null |
815
|
2
| null |
This book is quite good as a starting point:
[http://www.amazon.co.uk/Counterparty-Credit-Risk-Challenge-Financial/dp/047068576X](http://rads.stackoverflow.com/amzn/click/047068576X)
| null |
CC BY-SA 2.5
| null |
2011-03-25T13:27:19.410
|
2011-03-25T13:27:19.410
| null | null |
89
| null |
818
|
2
| null |
816
|
1
| null |
Have a look at [http://en.wikipedia.org/wiki/Covariance_matrix](http://en.wikipedia.org/wiki/Covariance_matrix) - especially the properties part. According to [http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_covariance_matrix.htm#Animation_covariance%20matrix](http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_covariance_matrix.htm#Animation_covariance%20matrix), if you have a matrix $X$ of returns (one asset per row, one return observation per column), you can calculate the covariance matrix of the de-meaned returns as $\Sigma=XX^T/n$, where $n$ is the size of the sample. This should be rather simple in Java; in R it would look something like
```
Sigma <- function(X){
  n  <- ncol(X)        # number of return observations
  mu <- rowMeans(X)    # mean return of each asset
  Xc <- X - mu         # de-mean each asset's returns (recycles down columns)
  Xc %*% t(Xc) / n     # covariance matrix (assets x assets)
}
```
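Since the question is about Java, a minimal Java translation of the same computation (a sketch, not production code):
```
// Covariance matrix of a returns matrix: one asset per row, one observation per column.
public static double[][] covariance(double[][] x) {
    int m = x.length, n = x[0].length;
    double[] mu = new double[m];
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) mu[i] += x[i][j];
        mu[i] /= n; // mean return of asset i
    }
    double[][] sigma = new double[m][m];
    for (int i = 0; i < m; i++)
        for (int k = 0; k < m; k++) {
            for (int j = 0; j < n; j++)
                sigma[i][k] += (x[i][j] - mu[i]) * (x[k][j] - mu[k]);
            sigma[i][k] /= n;
        }
    return sigma;
}
```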
| null |
CC BY-SA 2.5
| null |
2011-03-25T15:08:46.223
|
2011-03-25T15:32:43.563
|
2011-03-25T15:32:43.563
|
357
|
357
| null |
819
|
2
| null |
816
|
8
| null |
>
I am implementing a method in Java to
calculate the variance, covariance,
and value at risk for a portfolio,
which should be flexible for use with
any number of assets in a portfolio. I
am struggling with how to calculate
the covariance of the assets as I can
only find formulae to do so for two or
three sets of values.
Are you sure you are up to the task? Do you have access to [R](http://www.r-project.org) (hey, it's free and open source) or Matlab (hey, [Octave](http://www.octave.org) is free and open source) or something similar (hint: no, not Excel) to prototype this?
Otherwise, I don't even know where to start as there is so much more to this:
- non-synchronicity of returns (as your assets may not all trade at the same time),
- missing observations (leading to non-positive definite matrices),
- roundoff error,
- modeling issues,
- factor-models for dimension reduction as you do not want N x N for really large N.
There have literally been shelves full of dissertations and practitioner books written on this. Read some---fifteen years ago we all read the first RiskMetrics (now part of MSCI) manual, which was pretty novel and path-breaking then. It has answers to your questions too.
A decade ago, I did something like this for a universe of 200 assets in Perl (don't ask) and it can be done that way. That doesn't mean it should be done that way. Besides learning about the underlying (financial econometrics) math, you should also learn about some numerical libraries for Java. No need to reinvent the wheel.
| null |
CC BY-SA 2.5
| null |
2011-03-25T15:36:21.583
|
2011-03-25T15:36:21.583
| null | null |
69
| null |
820
|
2
| null |
798
|
6
| null |
I would assign the cost of incompleteness as the 90th percentile of N-period losses expected on the mis-hedged portfolio (where N is perhaps 5 trading days -- enough for a trader to get hit by a bus and someone else to catch up on his book). This is nicely compatible with VaR computations, corrects for the fact that expected cost of a mis-hedge is usually zero, and doesn't involve any tricky utility function theory.
You sometimes see more precise measurements made. For example some papers in the 1990s calculated the exact optimal hedging strategy for European options given a particular bid-offer spread on the underlying and (I seem to recall) utility function assumptions.
If you like utility functions a lot, clearly you can assign one to the variance (or other metric) of P&L arising from mis-hedging, and use the utility function to turn that directly into a cost. You can approximate the "right" function parameters by, say, looking at your firm's recent returns and Sharpe ratio.
| null |
CC BY-SA 2.5
| null |
2011-03-25T18:52:45.920
|
2011-03-28T19:24:26.057
|
2011-03-28T19:24:26.057
|
254
|
254
| null |
821
|
1
|
825
| null |
11
|
8500
|
Please explain and discuss these limitations, and explain which models I can use to overcome them. Alternatively, provide examples of how to modify the original Black-Scholes model to overcome these limitations.
|
What are the main limitations of Black Scholes?
|
CC BY-SA 2.5
| null |
2011-03-25T22:44:02.430
|
2011-03-30T00:36:51.497
|
2011-03-25T23:21:01.870
| null | null |
[
"black-scholes"
] |
822
|
2
| null |
821
|
3
| null |
Technical assumptions are below. I think in practice the most vexing assumptions are (i) the Brownian motion assumption, which has returns as normal and therefore future prices as lognormal (the existence of volatility smiles refutes lognormal prices), and (ii) the constant volatility assumption (also empirically refuted). The original BSM covers European-style, non-dividend-paying options only, but many assumptions can be overcome with extensions: American style, dividends, changing volatility.
Assumptions used to derive the BSM differential equation (source: John Hull):
- The stock price follows a Wiener process (itself a particular Markov stochastic process) with constant volatility
- Short selling is allowed
- No transaction costs and no taxes; securities are perfectly divisible
- Dividends are not paid
- There are no (risk-less) arbitrage opportunities
- Security trading is continuous
- The risk-free rate of interest is constant and the same for all maturities
| null |
CC BY-SA 2.5
| null |
2011-03-26T01:26:47.097
|
2011-03-26T01:26:47.097
| null | null |
640
| null |
823
|
2
| null |
799
|
2
| null |
I'm probably missing something, but why not apply Black-Scholes to each leg and add the results to get the price distribution for the spread? You'll get a non-closed-form result, but you can evaluate it to arbitrary precision using numerical methods.
To add probability distributions:
```
Suppose Z = X + Y where X and Y are independent random variables. Then
(PDF = probability density function, CDF = cumulative distribution function):
P(Z=z) = P(X=x)*P(Y=z-x) integrated over all x, or (Mathematica format):
PDF[Z,z] = Integrate[PDF[X,x]*PDF[Y,z-x],{x,-Infinity,+Infinity}]
A mathematically equivalent form:
CDF[Z,z] = Integrate[PDF[X,x]*CDF[Y,z-x],{x,-Infinity,+Infinity}]
(derivation left as exercise to the reader)
```
| null |
CC BY-SA 2.5
| null |
2011-03-26T13:52:02.077
|
2011-03-26T13:52:02.077
| null | null | null | null |
824
|
2
| null |
796
|
2
| null |
It might be easier to use the Black-Scholes formula for binary options:
[http://en.wikipedia.org/wiki/Binary_option#Black-Scholes_Valuation](http://en.wikipedia.org/wiki/Binary_option#Black-Scholes_Valuation)
then add the distributions for each leg:
[Heuristics for calculating theoretical probabilities of being ITM at time T for listed options](https://quant.stackexchange.com/questions/799/heuristics-for-calculating-theoretical-probabilities-of-being-itm-at-time-t-for-l/823#823)
and then use numerical methods to calculate what volatility makes the
legs match the quoted prices.
| null |
CC BY-SA 2.5
| null |
2011-03-26T14:01:38.657
|
2011-03-26T14:01:38.657
|
2017-04-13T12:46:23.037
|
-1
| null | null |
825
|
2
| null |
821
|
4
| null |
Actually, handling dividends is fairly easy:
[http://en.wikipedia.org/wiki/Black-scholes#cite_note-div_yield-3](http://en.wikipedia.org/wiki/Black-scholes#cite_note-div_yield-3)
David mentions this above, but "Stock price follows a Weiner [sic]
process" is worth a little more discussion. Recently, USDJPY fell 300
pips in just a few minutes. If you accept that USDJPY follows a Wiener
process, the odds of this happening even once in a million years are
astronomical. USDJPY has done something equally unlikely before (250
pips in a few minutes, if I remember correctly).
The problem: once something falls "a lot" quickly, it's likely to fall
even further. In other words, a loss of 300 pips in 5 minutes is more
likely than a loss of 75 pips in 5 minutes.
The solution is to use "fat-tailed" distributions:
[http://en.wikipedia.org/wiki/Fat_tail#Applications_in_economics](http://en.wikipedia.org/wiki/Fat_tail#Applications_in_economics)
but, of course, you then have to decide which fat-tailed
distribution to use.
I'm not sure the volatility smile disproves the lognormal distribution. My
theory on the volatility smile:
[Why does implied volatility show an inverse relation with strike price when examining option chains?](https://quant.stackexchange.com/questions/27/why-does-implied-volatility-show-an-inverse-relation-with-strike-price-when-exami)
| null |
CC BY-SA 2.5
| null |
2011-03-26T14:14:37.980
|
2011-03-26T14:14:37.980
|
2017-04-13T12:46:23.037
|
-1
| null | null |
828
|
1
|
829
| null |
-2
|
697
|
What's the right way to take a series of returns and convert it into a continuous index? Let's say I want to show the performance of a strategy starting from 1, and adding on returns so that I get an equity curve, should I be using `cumsum(1 + returns)` or `cumprod(1 + returns)`?
|
Convert returns into an index?
|
CC BY-SA 2.5
| null |
2011-03-27T17:34:37.607
|
2011-03-27T18:52:48.753
| null | null |
658
|
[
"returns",
"equity-curve"
] |
829
|
2
| null |
828
|
4
| null |
It should be cumprod. Say you have an index of 0.7, and a daily return of -10%. The new index should be 0.63, not 0.6.
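A quick illustration in Python/NumPy, with arbitrary return values:
```
import numpy as np

returns = np.array([-0.10, 0.05, 0.02])  # arbitrary daily returns

index = np.cumprod(1 + returns)  # [0.9, 0.945, 0.9639] - compounds correctly
wrong = np.cumsum(1 + returns)   # [0.9, 1.95, 2.97]    - adds a spurious 1 each day
print(index, wrong)
```
Equivalently, if you prefer sums, take the cumulative sum of log returns and exponentiate: `np.exp(np.cumsum(np.log(1 + returns)))` gives the same index.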
| null |
CC BY-SA 2.5
| null |
2011-03-27T18:52:48.753
|
2011-03-27T18:52:48.753
| null | null |
357
| null |
832
|
1
| null | null |
7
|
922
|
This is a bit DSP-related: if you turn your non-stationary time series into a stationary process, you'll probably see that it is not periodic. This is an issue for Fourier-based techniques because they are not local in frequency. Now, besides wavelets (some types are causal, btw), which other causal techniques can you use? (And ARMA is not it.) I tried Empirical Mode Decomposition (HHT), but that's not causal; I tried Intrinsic Time-scale Decomposition: not causal either. Wavelets are pretty old and I would think something better would have been "discovered" by now. Does anyone know of a good causal signal-processing technique that deals well with non-periodicity? Thanks!
|
DSP: stationary non-periodic signal: what's the best causal technique?
|
CC BY-SA 2.5
| null |
2011-03-27T23:12:09.317
|
2011-09-17T16:48:37.493
| null | null |
659
|
[
"arima"
] |
833
|
2
| null |
832
|
2
| null |
The issue with wavelets is that you'll have some boundary distortions so be careful when exploiting the results.
| null |
CC BY-SA 3.0
| null |
2011-03-28T01:18:02.843
|
2011-07-05T10:15:07.213
|
2011-07-05T10:15:07.213
|
134
|
134
| null |
834
|
1
|
836
| null |
12
|
2022
|
In reference to the original Black Scholes model, what approach is best to test the model in a rigorous way? Is there a standard approach that can accomplish this in a reasonable amount of time?
Details I require:
- number of trials,
- which software to use, formulas etc.
- any other information that I should be aware of
Note: this should be doable on a laptop with a Core i5 processor and a graphics card.
|
How to conduct Monte Carlo simulations to test validity of Black Scholes for a specific option?
|
CC BY-SA 2.5
| null |
2011-03-28T10:53:40.497
|
2011-03-31T17:54:17.040
|
2011-03-28T12:10:52.427
| null | null |
[
"black-scholes",
"backtesting"
] |
835
|
2
| null |
834
|
3
| null |
Write out your model as an SDE, simulate it and compare the result with an analytical solution (if you've got one).
| null |
CC BY-SA 2.5
| null |
2011-03-28T12:18:05.973
|
2011-03-28T12:18:05.973
| null | null |
89
| null |
836
|
2
| null |
834
|
5
| null |
- I recommend MATLAB or Excel for simplicity - it depends on which one you already know.
- Write down the SDE for geometric Brownian motion (to simulate the stock price over time) on paper, as quant_dev mentioned.
Discretize it using e.g. forward Euler discretization (see Wikipedia) and code up an MC
simulation of it for the time period over which you want to price your options.
Don't forget to use the risk-free dynamics in the SDE, otherwise you won't converge to the BS price.
- Code the $f(S_T)$ payoff function for your option.
- Calculate the expected (average over simulations), discounted payoff.
With 10 000 simulations, or even 100 000, your simulation should converge decently (error around $10^{-4}$) - your CPU should handle this in a few minutes at most. A minimal sketch of these steps follows below.
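Here is that sketch in Python (the answer suggests MATLAB/Excel, but the logic is identical; all parameter values are made up for illustration):
```
import numpy as np
from scipy.stats import norm

# Risk-neutral GBM, forward-Euler discretization: dS = r*S*dt + sigma*S*dW
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 100, 100_000
dt = T / n_steps

rng = np.random.default_rng(42)
S = np.full(n_paths, S0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    S += S * (r * dt + sigma * dW)  # one Euler step per path

# Expected, discounted call payoff
mc_price = np.exp(-r * T) * np.mean(np.maximum(S - K, 0.0))

# Black-Scholes closed form for comparison
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
print(mc_price, bs_price)  # should agree to within the MC standard error
```
Note that forward Euler adds a discretization bias of order $\Delta t$ on top of the MC error; for plain GBM you could also sample $S_T$ exactly in a single step.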
| null |
CC BY-SA 2.5
| null |
2011-03-28T13:31:43.853
|
2011-03-28T15:34:30.067
|
2011-03-28T15:34:30.067
|
35
|
647
| null |
837
|
2
| null |
834
|
0
| null |
I think what you really need to do is test the results of the analytic/simulated solution against actual (and subsequent) historical prices. So you need to get historical data for the specific option. That, to me, is much more interesting since it will tell you whether Black-Scholes works vs. reality. Of course you will need a large number of historical data points to tackle this.
| null |
CC BY-SA 2.5
| null |
2011-03-28T15:46:57.540
|
2011-03-28T15:46:57.540
| null | null |
520
| null |
838
|
2
| null |
834
|
2
| null |
On the software end, if you want something quick/dirty I would personally go with Matlab/R/python however if you want something a bit more rigorous (e.g. payoff classes, "better" SDEs) something OO like C++ would really be the route to take.
The basics are fairly simple; here's a quick sample of what it should look like:
```
#include <cmath>
#include <cstdlib>

// Draw a standard normal via the Box-Muller transform
double SNByBoxMuller()
{
    double u1 = (std::rand() + 1.0) / (RAND_MAX + 2.0); // shifted to avoid log(0)
    double u2 = (std::rand() + 1.0) / (RAND_MAX + 2.0);
    const double PI = 3.141592653589793;
    return std::sqrt(-2.0 * std::log(u1)) * std::cos(2.0 * PI * u2);
}

// Monte Carlo price of a European call under Black-Scholes dynamics
double MCCallPrice(double s, double strike, double r,
                   double vol, double expire, unsigned long NumOfPaths)
{
    double variance = vol * vol * expire;
    double rootVariance = std::sqrt(variance);
    double halfVar = -0.5 * variance;
    double SpotPlusOne = s * std::exp(r * expire + halfVar); // deterministic part of S_T
    double runningSum = 0;
    for (unsigned long i = 0; i < NumOfPaths; i++)
    {
        double SN = SNByBoxMuller();
        double Spot = SpotPlusOne * std::exp(rootVariance * SN); // sampled S_T
        double PayOff = Spot - strike;
        PayOff = PayOff > 0 ? PayOff : 0;   // call payoff max(S_T - K, 0)
        runningSum += PayOff;
    }
    double mean = runningSum / NumOfPaths;
    mean *= std::exp(-r * expire);          // discount the expected payoff
    return mean;
}
```
SNByBoxMuller() just generates a standard normal draw via the Box-Muller transform (a basic implementation is included above).
| null |
CC BY-SA 2.5
| null |
2011-03-28T16:10:51.620
|
2011-03-28T16:10:51.620
| null | null |
122
| null |
839
|
1
| null | null |
28
|
5374
|
I was thinking about writing my own backtester and I realize I have to make some assumptions. So I was hoping I could post what I am planning on doing and hopefully some of you can give me some ideas on how to make it better (I'm sure there is a lot that can be improved).
First of all, my strategy involves holding stocks for usually some days, I am not doing (probably any) intra-day trading.
So here is what I was thinking. First, I would buy some minute OHLC stock quotes covering the stocks I am interested in (thinking about buying some from pitrading.com - is their quality acceptable?). Then if the algorithm triggers a buy or sell at some bar, I would "execute" the order using the high or low of the very next bar (attempting to be as pessimistic as possible here). One thing I am curious about is the bid/ask spread, so I was thinking about adding/subtracting a few cents to take this into account when buying/selling. I would just look at what these values have been recently (the difference between bid/ask and the quoted price for some recent data on these stocks) and use those numbers, as I wouldn't be backtesting that far back. I would assume that I can buy/sell all I want at that price.
Lastly I would include the cost of commission in the trade. I would neglect any effect my trade would have on the market. Is there any rough guideline using volume to estimate how much you would have to buy/sell to have an effect?
I would also simulate stop-loss sell orders and they, too, would be executed at the next bar low after the price passed the threshold.
That's it, it will be pretty simple to implement. I am just hoping to make it conservative so it can give me some insight into how well my program works.
Any thoughts or criticisms about this program would be greatly appreciated. I am new at this and I am sure there are some details I am missing.
|
How to design a custom equity backtester?
|
CC BY-SA 3.0
| null |
2011-03-29T06:18:53.347
|
2014-08-20T15:16:41.760
|
2011-09-15T21:04:53.203
|
1106
|
667
|
[
"trading",
"backtesting",
"quant-trading-strategies",
"equities"
] |
840
|
2
| null |
466
|
2
| null |
I've priced similar animals with a naive N-factor method, adding a convexity adjustment for the swap rates. But I'm not sure this is very orthodox...
| null |
CC BY-SA 2.5
| null |
2011-03-29T09:12:16.807
|
2011-03-29T09:12:16.807
| null | null |
668
| null |
841
|
1
|
853
| null |
9
|
1443
|
Nowadays structured products (or packages) with complex payoff diagrams are omnipresent.
Do you know of any software, add-ons, apps, code, whatever, that enables you to enter a payoff diagram or a cashflow profile and gives you the basic building blocks - the underlying, zero-coupon bonds and especially all the option components with their different strikes - needed to replicate this payoff?
EDIT: Because some people asked what the input of such a tool could be, have a look at this example - I am asking for a software that is able to do this kind of decomposition automatically:
[http://www.risklatte.com/Articles_new/Exotics/exotic_28.php](http://www.risklatte.com/Articles_new/Exotics/exotic_28.php)
|
Software for decomposing payoff diagrams into plain vanilla products
|
CC BY-SA 4.0
| null |
2011-03-29T09:33:46.040
|
2019-10-13T21:00:43.270
|
2019-10-13T21:00:43.270
|
12
|
12
|
[
"option-pricing",
"software",
"replication"
] |
842
|
2
| null |
839
|
13
| null |
For "maximum pessimism" you should calculate thus:
- for longs - enter at the bar high and
exit at the bar low on bar following
signal bar
- for shorts - enter at the bar low and
exit at the bar high on bar following
signal bar
I had previously heard this approach to back testing called the "torture test."
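A minimal sketch of these fill rules in Python - the bar fields and function name are my own, purely illustrative:
```
# "Torture test" fills: every order executes at the worst price of the
# bar after its signal bar.
def pessimistic_fill(bars, i_signal, side, action):
    nxt = bars[i_signal + 1]  # bar following the signal bar
    worst_is_high = (side == "long") == (action == "enter")
    return nxt["high"] if worst_is_high else nxt["low"]

bars = [{"high": 101, "low": 99}, {"high": 103, "low": 100}]
print(pessimistic_fill(bars, 0, "long", "enter"))  # long entry buys the high: 103
print(pessimistic_fill(bars, 0, "long", "exit"))   # long exit sells the low: 100
```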
| null |
CC BY-SA 2.5
| null |
2011-03-29T11:02:03.053
|
2011-03-29T11:02:03.053
| null | null |
252
| null |
843
|
1
|
873
| null |
5
|
1682
|
I am substituting reasonable values in the formula below (like r=0.12, T=20, nColumn=16, sigma=0.004)... why is the probability coming out greater than 1? Any help? Thanks!
```
del_T=T./nColumn; % where n is the number of columns in binomial lattice
u=exp(sigma.*sqrt(del_T));
d=1./u;
p=(exp(r.*del_T)-d)./(u-d); % risk neutral probability
```
|
Risk neutral probability in binomial lattice option coming greater than 1...what's wrong?
|
CC BY-SA 2.5
| null |
2011-03-29T18:42:34.480
|
2011-04-01T08:40:44.203
| null | null | null |
[
"probability",
"binomial-tree"
] |
844
|
2
| null |
843
|
0
| null |
Simple: decrease the time step. The binomial tree is just an approximation, and you can't really call $p$ a genuine probability. For $p \le 1$ you need $e^{r\,\Delta t} \le u = e^{\sigma\sqrt{\Delta t}}$, i.e. $\sqrt{\Delta t} \le \sigma/r$; with your parameters $\sigma/r \approx 0.033$ while $\sqrt{\Delta t} = \sqrt{20/16} \approx 1.12$, hence $p > 1$.
Your parameters are also rather far from "typical". I would choose:
$r = 0.05$ (still higher than the current risk-free rate)
$\sigma = 0.2$ (more typical volatility value)
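To see it numerically, here is a small sketch re-implementing the question's formula (in Python rather than MATLAB):
```
import numpy as np

# p = (exp(r*dt) - d) / (u - d) is a valid probability iff exp(r*dt) <= u,
# i.e. sqrt(dt) <= sigma / r
def risk_neutral_p(r, sigma, T, n):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    return (np.exp(r * dt) - d) / (u - d)

print(risk_neutral_p(0.12, 0.004, 20, 16))  # ~18.6: time step far too coarse
print(risk_neutral_p(0.05, 0.2, 20, 16))    # ~0.59: a valid probability
```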
| null |
CC BY-SA 2.5
| null |
2011-03-29T20:03:55.700
|
2011-03-29T21:42:07.130
|
2011-03-29T21:42:07.130
|
89
|
89
| null |
845
|
1
|
846
| null |
4
|
853
|
I have just started applying Binomial-Lattice, however I am yet to fully understand few things. My questions are:
- What is the concept of working backward (left side) from the values in the terminal (farthest) nodes on the right side? Why do we need to do backward induction? I started my first node with some value S, at say time t=1; then all I need to know are the option values anywhere ahead of this time until the option expires. What is the need and meaning of the values obtained by backward induction, and how are those different from forward induction?
- Now if I apply backward induction, the values I get at the starting node (the left-most node) are far greater than the values at any other node. What does that mean?
Let's take this for example: I start with S=1.5295e+009 at the starting node, and then after using the binomial lattice and doing backward induction, I get 9.9708e+10 at the starting node. Why has it increased this much and what does that imply? If I halve my time step, I get even more extreme values like -1.235e+25.
- Are the values we get at each node as we move ahead of the starting node (i.e. the values at the nodes to the right of the left-most node) analogous to the Present Value (PV) at that time, or to the Net Present Value (NPV)?
EDIT: This is my Matlab code for binomial lattice:
```
function [price,BLOV_lattice]=BLOV_general(S0,K,sigma,r,T,nColumn)
% BLOV stands for Binomial Lattice Option Valuation
%% Constant parameters
del_T=T./nColumn; % where n is the number of columns in binomial lattice
u=exp(sigma.*sqrt(del_T));
d=1./u;
p=(exp(r.*del_T)-d)./(u-d);
a=exp(-r.*del_T);
%% Initializing the lattice
Stree=zeros(nColumn+1,nColumn+1);
BLOV_lattice=zeros(nColumn+1,nColumn+1);
%% Developing the lattice
for i=0:nColumn
for j=0:i
Stree(j+1,i+1)=S0.*(u.^j)*(d.^(i-j));
end
end
for i=0:nColumn
BLOV_lattice(i+1,nColumn+1)=max(Stree(i+1,nColumn+1)-K,0);
end
for i=nColumn:-1:1
for j=0:i-1
BLOV_lattice(j+1,i)=a.*(((1-p).*BLOV_lattice(j+1,i+1))+(p.*BLOV_lattice(j+2,i+1)));
end
end
price=BLOV_lattice(1,1);
```
EDIT 2 (an additional question): If the binomial lattice is giving me option PVs, and my PVs are supposed to decrease with time, then why do more than half of the values in my terminal nodes show an increase over what I start with (`=S0`)? See the attached picture for values.
|
Few questions on Binomial-Lattice Option Valuation
|
CC BY-SA 2.5
| null |
2011-03-29T21:45:01.703
|
2011-04-02T05:08:23.923
|
2020-06-17T08:33:06.707
|
-1
| null |
[
"binomial-tree",
"fundamentals"
] |
846
|
2
| null |
845
|
1
| null |
Let's start with question (2). If you are not obtaining $S=1.5295e+009$ after backwardation, then you have a bug in your binomial tree code. You may wish to find and eliminate that before proceeding.
One simple check is to make all the terminal nodes have value 1.0. You should obtain that the initial node has value $e^{-rT}$. This assumes, of course, that you are using one of the better tree formulations that does not approximate the interest rate term. Also, check your valuations against one of the online American option pricer websites.
Now, the reason you backwardate is, colloquially, that the tree is meant to represent option present-values under a particular set of assumptions and scenarios about how stock prices change. Note that during construction you effectively "forwardate" the stock prices $S$ on the tree (albeit in a trivial manner). Considering the knowledge you start out with for all these stock price scenarios, it is only at the terminal nodes of the tree that the option prices are clearly known with certainty.
The backwardation process is allowing you to form speculative values for the option in nodes / scenarios where you previously did not have a solid idea what the option value is. This whole business is hidden from you in the Black-Scholes formulas, but becomes more explicit in trees due to the need to account for early exercise in the scenarios.
There's a bunch of complicated stochastic calculus and dynamic programming theory behind why the trees you construct are a correct technique for handling the problem of option pricing, but the above should give you a basic idea.
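Here is a minimal sketch of that terminal-node check in Python, mirroring the CRR parameterization of your MATLAB code (the parameter values are arbitrary):
```
import numpy as np

# If every terminal payoff is 1.0, backward induction must return the
# discount factor exp(-r*T) at the root.
def discount_check(r=0.05, sigma=0.2, T=1.0, n=100):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)  # risk-neutral probability
    disc = np.exp(-r * dt)
    v = np.ones(n + 1)                  # all terminal nodes set to 1.0
    for _ in range(n):
        v = disc * (p * v[1:] + (1 - p) * v[:-1])
    return v[0], np.exp(-r * T)

print(discount_check())  # the two numbers should agree to rounding error
```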
| null |
CC BY-SA 2.5
| null |
2011-03-29T22:18:18.553
|
2011-03-29T22:18:18.553
| null | null |
254
| null |
847
|
2
| null |
821
|
0
| null |
One big limitation is that the BSM doesn't work for long-term option pricing; see my blog post below:
[http://value2get.blogspot.com/2011/03/why-doesnt-black-scholes-model-work-in.html](http://value2get.blogspot.com/2011/03/why-doesnt-black-scholes-model-work-in.html)
| null |
CC BY-SA 2.5
| null |
2011-03-30T00:36:51.497
|
2011-03-30T00:36:51.497
| null | null | null | null |
848
|
1
| null | null |
53
|
27582
|
Suppose I have two time series $X$ and $Y$ of stock prices. How do I measure the "similarity" of $X$ and $Y$?
(I'm being deliberately vague as I don't have a particular application, and I'm curious about different approaches in general. But I guess you can imagine that there's some stock x that I don't want to trade directly, for whatever reason, so I want to find a similar stock y to trade in its place.)
One method is to take a Pearson or Spearman correlation. To avoid problems of spurious correlation (since the price series likely contain trends), I should take these correlations on the differenced or returns series (which should be more stationary).
What are other similarity methods and their pros/cons?
|
Time-series similarity measures
|
CC BY-SA 2.5
| null |
2011-03-30T00:38:13.493
|
2016-03-02T20:18:48.713
|
2011-03-30T16:29:47.017
|
672
|
672
|
[
"time-series",
"correlation"
] |
849
|
2
| null |
147
|
4
| null |
I think you can look at books like "[The Evaluation and Optimization of Trading Strategies](http://rads.stackoverflow.com/amzn/click/0470128011)" for some more insight about developing testing strategies.
Also, don't forget that you, the strategy designer, know the future and could unintentionally develop biased strategies. It's hard to unlearn something once you have learned it.
| null |
CC BY-SA 2.5
| null |
2011-03-30T03:56:28.773
|
2011-03-30T03:56:28.773
| null | null |
511
| null |
850
|
2
| null |
848
|
29
| null |
I assume you're using returns (or log returns) [instead of actual stock prices](https://quant.stackexchange.com/q/489/35). In practice, you may also want to smooth the data by using a moving average.
There are several correlation coefficients:
- Pearson's $r$ - most commonly used definition of correlation:
\begin{equation}
r = \frac{\sigma_{x,y}}{\sigma_x \sigma_y}
\end{equation}
- Spearman's $\rho$ - uses the rank of each data set (array index if data had been sorted); less sensitive to outliers in the sample as it's non-parametric:
\begin{equation}
d_i = \operatorname{rank}(x_i) - \operatorname{rank}(y_i)
\end{equation}
\begin{equation}
\rho = 1 - \frac{6\Sigma d_{i}^{2}}{n(n^{2}-1)}
\end{equation}
- Kendall's $\tau$ - also based on ranking, but represents the probability that the two data sets are in the same order vs. the probability that they are in different orders:
\begin{equation}
\tau = \frac{C - D}{\frac{1}{2} n(n - 1)}
\end{equation}
- Kruskal's $\Gamma$ - similar to Kendall's, but acknowledges ties in the ordering:
\begin{equation}
\Gamma = \frac{C - D}{C + D}
\end{equation}
---
$C$ is the number of [concordant pairs](http://en.wikipedia.org/wiki/Concordant_pair) and $D$ is the number of discordant pairs.
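For what it's worth, the first three are available in SciPy (Goodman and Kruskal's $\Gamma$ is not, but it follows directly from the concordant/discordant counts); the series below are simulated purely for illustration:
```
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=250)                  # toy return series
y = 0.6 * x + 0.8 * rng.normal(size=250)  # correlated toy series

r, _ = stats.pearsonr(x, y)      # Pearson's r
rho, _ = stats.spearmanr(x, y)   # Spearman's rho (rank-based)
tau, _ = stats.kendalltau(x, y)  # Kendall's tau (concordance-based)
print(r, rho, tau)
```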
| null |
CC BY-SA 2.5
| null |
2011-03-30T03:57:11.160
|
2011-03-30T04:02:17.753
|
2017-04-13T12:46:23.037
|
-1
|
35
| null |
851
|
2
| null |
764
|
2
| null |
I'm trying to understand what you are asking:
- you cannot use options directly, only buying stock
- thus, you want to use options as an indicator on whether or not you should buy a stock
I don't think the BS model is a great indicator of stock direction because it's based on not knowing where the stock may go. That said, a lot of people use action in the options market for suggestions about the stock movement.
One such indicator is the put/call ratio. Greater put buying suggests downside risk and call buying suggests upside movement.
You may want to also consider heavily traded strike prices as an indicator to what level the stock might trade to. If your stock is trading at 100 and there's heavy open interest at the 120 calls, the theory suggests that "smart money" knows something.
Happy trading.
| null |
CC BY-SA 2.5
| null |
2011-03-30T04:10:01.710
|
2011-03-30T04:10:01.710
| null | null |
511
| null |
852
|
1
|
856
| null |
7
|
1171
|
I am trying to get historical price data on selected American and European style options at EUREX. I am not familiar with their system. Does anyone know whether they have something like Yahoo Finance where I can just download market data through R? Or at least, Excel files with daily prices.
Thank you
|
Need historical prices of EUREX American and European style options
|
CC BY-SA 2.5
| null |
2011-03-30T05:45:14.470
|
2015-08-03T01:25:40.997
|
2011-03-30T12:18:35.017
|
12
|
428
|
[
"options",
"eurex"
] |
853
|
2
| null |
841
|
8
| null |
I do not know of such software - but we can think about the code. There are two points which you have to define properly:
- which assets (correspondingly, payoffs) are you allowed to use to replicate the complicated option?
- as barrycarter has already asked - what should be the form of the input?
The further procedure should be quite easy. You are trying to find a linear combination $\lambda$ of basic assets $s_1, s_2, \ldots$ (because in practice this is the only possibility for you to "combine" them) which fits the complex payoff $\gamma$. It's just a piecewise-affine optimization problem. Once you minimize the difference $|\lambda - \gamma|$ you have either zero (so you have found the replication formula) or something greater than zero (which means that there is no replication formula which perfectly covers this complicated payoff).
Once you determine the points I've mentioned, I believe I will be able to help you solve this problem.
Edit: Let us call your payoff $P(S)$ and the simple payoff functions $P_1(S,\theta_1), P_2(S,\theta_2), \ldots$, where the $\theta_i$ are parameters, e.g. the strike of a call or put.
Then you would like to check if there exist $a_1,a_2,...$ such that
$$
P(S) = \sum\limits_i a_i P_i(S,\theta_i).
$$
You can solve this problem by defining
$$
J(a,\theta) = ||P(\cdot) - \sum\limits_i a_i P_i(\cdot,\theta_i)||
$$
where you can use any norm - and in fact due to the structure of payoffs, this norm should be defined only on some finite interval $[0,S']$. Then you solve
$$
J(a,\theta)\to \min
$$
and if the extremum value is $0$ - you can cover your exotic payoff with simple ones, if non-zero - you cannot cover it perfectly, but the obtained values of $a,\theta$ will be optimal.
If you need more details about the solution of the optimization problem - just tell me.
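For a concrete flavour of the optimization, here is a minimal sketch in Python: if you fix the strikes $\theta_i$ on a grid, solving for the weights $a_i$ reduces to linear least squares (the target payoff, grid and strikes below are made up for illustration):
```
import numpy as np

S = np.linspace(0.0, 200.0, 401)  # grid of terminal spot prices
target = np.clip(S - 80, 0, 20)   # e.g. an 80/100 bull call spread

strikes = np.arange(10.0, 200.0, 10.0)
basis = [np.ones_like(S), S]                      # zero-coupon bond, underlying
basis += [np.maximum(S - K, 0) for K in strikes]  # call payoffs
A = np.column_stack(basis)

a, residual, *_ = np.linalg.lstsq(A, target, rcond=None)
print(residual)  # (near) zero residual => payoff replicable with this basis
```
If the strikes are free parameters as well, the problem becomes nonlinear, and you would wrap this inner least-squares fit in a nonlinear optimizer.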
P.S. I think the paper you have referred to is not correct - the payoff is not piecewise affine, yet they plot it (and treat it) as a piecewise-affine function.
| null |
CC BY-SA 2.5
| null |
2011-03-30T06:15:04.317
|
2011-03-31T15:17:17.183
|
2011-03-31T15:17:17.183
|
464
|
464
| null |
854
|
2
| null |
848
|
6
| null |
You can look at cointegration.
| null |
CC BY-SA 2.5
| null |
2011-03-30T07:37:37.430
|
2011-03-30T07:37:37.430
| null | null |
155
| null |
855
|
2
| null |
834
|
0
| null |
It seems like everyone here is suggesting MC, but you can use PDE methods as well.
Anyway, there are two things that you can usually check: the price and ... the hedge (or replication price).
Let's look at the first case:
- If you have closed-form formulas (as is usually the case in the BS "fantas(ma)tic wishful-thinking" setting), then that's all you need. If not, then you are not comfortable with your math (or your model, but this is another issue).
- If you don't have such an analytical solution at hand, then usually MC comes in naturally (as everyone suggests here), but you could use PDE methods as well (after all, it was the original method for the derivation of the BS call/put option prices). And you have plenty of books and nice articles that will tell you how to proceed in both cases (especially in BS settings). An easy check that I recommend is to compare the "closed-form formulas" vs. "MC (and/or PDE)" in the vanilla cases. Moreover, those methods provide a good introduction to the replication prices that you might be willing to check in the second case.
Note that by using finite-difference (or finite-element) methods for the PDE you get an error estimate, and that when using a discretization scheme for the SDE you get, at the end of the day, a "random variable" for your P&L. It is really a matter of taste; in my opinion both methods have pros and cons.
For the second case, which I called the replication price, it is usually obtained in an (at least in principle) straightforward manner from the methods you used for the PDE and/or SDE discretization.
Still regarding replication prices, the very recent article by Wilmott and Ahmad "[Which Free Lunch Would You Like Today, Sir?: Delta Hedging, Volatility Arbitrage and Optimal Portfolios](http://www.wilmott.com/detail.cfm?articleID=353)" is really illuminating in many ways and stays in the BS setting you want to stay within; I think you should read it.
| null |
CC BY-SA 2.5
| null |
2011-03-30T08:24:07.623
|
2011-03-31T17:54:17.040
|
2011-03-31T17:54:17.040
|
35
|
92
| null |
856
|
2
| null |
852
|
3
| null |
Unfortunately, the only way to get data from the biggest options exchange in the world is to buy it from EUREX directly (or from some other professional data provider) - and it is quite expensive.
More info (and some sample files) can be found here:
[http://www.eurexchange.com/market/historical_data_en.html](http://www.eurexchange.com/market/historical_data_en.html)
| null |
CC BY-SA 2.5
| null |
2011-03-30T12:22:07.807
|
2011-03-30T12:22:07.807
| null | null |
12
| null |
857
|
1
| null | null |
6
|
573
|
I have one portfolio with high-beta stocks, and one with low-beta stocks. Is it better to have a higher expected return with high volatility, or a medium expected return with medium volatility? (All from an asset allocation, efficient frontier, risk/reward perspective.)
|
Given two portfolios with identical correlation matrices, which one will have a better risk/reward ratio?
|
CC BY-SA 2.5
| null |
2011-03-30T14:43:57.497
|
2013-01-05T02:05:13.540
|
2011-03-30T14:53:37.207
|
35
|
520
|
[
"beta",
"modern-portfolio-theory"
] |
858
|
2
| null |
857
|
5
| null |
Most portfolio managers look at the [Sharpe ratio](http://en.wikipedia.org/wiki/Sharpe_ratio), or occasionally the [Treynor ratio](http://en.wikipedia.org/wiki/Treynor_ratio). In general, you want to maximize one of these metrics, though there could be other issues that you haven't yet considered, like turnover or the transaction costs of obtaining the portfolio.
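For reference, a minimal sketch of both ratios in Python (the annualization convention, risk-free rate and inputs are assumed for illustration):
```
import numpy as np

def sharpe(returns, rf=0.0, periods=252):
    excess = returns - rf / periods  # per-period excess returns
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

def treynor(returns, beta, rf=0.0, periods=252):
    excess = returns - rf / periods
    return periods * excess.mean() / beta  # annualized excess return per unit beta

rets = np.random.default_rng(0).normal(0.0005, 0.01, 252)  # toy daily returns
print(sharpe(rets, rf=0.02), treynor(rets, beta=1.2, rf=0.02))
```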
| null |
CC BY-SA 2.5
| null |
2011-03-30T14:52:14.523
|
2011-03-30T14:52:14.523
| null | null |
35
| null |