[Dataset columns: Id · PostTypeId · AcceptedAnswerId · ParentId · Score · ViewCount · Body · Title · ContentLicense · FavoriteCount · CreationDate · LastActivityDate · LastEditDate · LastEditorUserId · OwnerUserId · Tags]
1
1
null
null
26
5651
To get the ball rolling, I will answer this question myself this evening. For people both aware and unaware of the field, I think it would be a great way to introduce the group to resources for fundamental knowledge and concepts.
What are some good technical and non-technical books for a math lover to get in to quantitative analysis?
CC BY-SA 2.5
null
2011-01-31T21:02:03.567
2011-01-31T21:39:02.880
2011-01-31T21:13:15.943
23
6
[ "learning", "finance", "books", "quantitative", "analysis" ]
2
2
null
1
6
null
I like Statistics and Data Analysis for Financial Engineering by David Ruppert (http://www.amazon.com/gp/product/1441977864/ref=oss_product)
null
CC BY-SA 2.5
null
2011-01-31T21:05:44.007
2011-01-31T21:05:44.007
null
null
33
null
3
1
null
null
12
1800
I want to start learning quantitative finance. What articles or blogs should I look at to get started? Also see the related question on [Quantitative Finance Books](https://quant.stackexchange.com/questions/1/what-are-some-good-technical-and-non-technical-books-for-a-math-lover-to-get-in-t).
What blogs or articles online should I read to get started with quantitative finance?
CC BY-SA 2.5
null
2011-01-31T21:07:18.193
2011-01-31T21:53:08.947
2017-04-13T12:46:23.000
-1
27
[ "learning", "finance" ]
4
2
null
1
5
null
John C. Hull's ["Options, Futures, and Other Derivatives"](http://www.rotman.utoronto.ca/~hull/ofod/) is the most widely recognized introductory book for derivatives valuation.
null
CC BY-SA 2.5
null
2011-01-31T21:08:35.820
2011-01-31T21:08:35.820
null
null
17
null
5
1
120
null
19
899
How do you model the concentration risk of a credit portfolio in the IRB/Basel II framework?
Concentration risk in credit portfolio
CC BY-SA 2.5
null
2011-01-31T21:08:50.550
2014-07-16T15:52:43.620
2011-02-01T16:53:40.297
40
40
[ "risk", "credit" ]
6
2
null
1
5
null
This may be too basic a book for what you're hungering for. In preparation for the Financial Engineering actuarial exam, I'm studying from Derivatives Markets by Robert McDonald. It's very technical, but gives a great introduction to the mathematics behind pricing options and even goes into depth on Brownian motion. Check it out here: [http://amzn.to/g3QOES](http://amzn.to/g3QOES).
null
CC BY-SA 2.5
null
2011-01-31T21:09:01.193
2011-01-31T21:09:01.193
null
null
30
null
8
2
null
1
5
null
I like the following book (though I have only very briefly skimmed it): Optimization Methods in Finance.
null
CC BY-SA 2.5
null
2011-01-31T21:09:30.940
2011-01-31T21:09:30.940
null
null
29
null
9
1
16
null
17
2249
It seems that VIX futures could be a great hedge for a long-only stock portfolio since they rise when stocks fall. But how many VIX futures should I buy to hedge my portfolio, and which futures expiration should I use?
Hedging stocks with VIX futures
CC BY-SA 2.5
null
2011-01-31T21:09:36.020
2019-11-23T17:55:18.170
null
null
37
[ "equities", "vix", "hedging" ]
10
2
null
1
20
null
Clark, this is one of the most popular questions on our community: someone new to the field comes in and asks where they should start. We point them all to the list we have gathered, which is now one of the most comprehensive reading lists for quant finance: [http://www.quantnet.com/master-reading-list-for-quants/](http://www.quantnet.com/master-reading-list-for-quants/)
null
CC BY-SA 2.5
null
2011-01-31T21:10:16.820
2011-01-31T21:10:16.820
null
null
43
null
11
2
null
1
6
null
- [Options, Futures, and Other Derivatives](http://rads.stackoverflow.com/amzn/click/0132164949)
- [Analysis of Financial Time Series](http://rads.stackoverflow.com/amzn/click/0470414359)
- [Inside the Black Box: The Simple Truth About Quantitative Trading](http://rads.stackoverflow.com/amzn/click/0470432063)
- [Trading and Exchanges: Market Microstructure for Practitioners](http://rads.stackoverflow.com/amzn/click/0195144708)
null
CC BY-SA 2.5
null
2011-01-31T21:10:50.130
2011-01-31T21:10:50.130
null
null
35
null
12
2
null
1
6
null
One that I found via Google that seems promising (though aimed at beginners) is [Numerical Methods in Finance and Economics](http://books.google.com/books?id=litw2D3bLwMC&lpg=PP1&dq=optimization%20methods%20finance&pg=PP1#v=onepage&q=optimization%20methods%20finance&f=false).
null
CC BY-SA 2.5
null
2011-01-31T21:10:54.370
2011-01-31T21:10:54.370
null
null
29
null
13
2
null
3
3
null
Mark Joshi's advice - [http://www.markjoshi.com/downloads/advice.pdf](http://www.markjoshi.com/downloads/advice.pdf) ("On Becoming a Quant"). It's quite useful for getting some insight into the different sorts of quantitative analysts, their responsibilities, the types of companies, the requirements for interviews, etc. ... just a great article to begin with.
null
CC BY-SA 2.5
null
2011-01-31T21:11:11.057
2011-01-31T21:11:11.057
null
null
15
null
14
1
34
null
21
18850
At my work I often see option prices or vols quoted against deltas rather than against strikes. For example, for March 2013 Zinc options I might see 5 quotes available for deltas as follows:

```
ZINC MARCH 2013 OPTION DELTA -10 POINTS
ZINC MARCH 2013 OPTION DELTA -25 POINTS
ZINC MARCH 2013 OPTION DELTA +10 POINTS
ZINC MARCH 2013 OPTION DELTA +25 POINTS
ZINC MARCH 2013 OPTION DELTA +50 POINTS
```

The same 5 values -10, -25, +10, +25, +50 are always used. What are these deltas? What are the units? Why are those 5 values chosen?
What is the "delta" option quoting convention about?
CC BY-SA 2.5
0
2011-01-31T21:13:06.893
2016-08-10T07:54:14.220
2011-01-31T21:32:40.543
70
39
[ "options", "greeks" ]
15
2
null
3
9
null
I second the Joshi guide, but you can do better than that. We have a list for all levels; some of the items are free to download (just like Joshi), and others are books and websites suitable for beginners: [http://www.quantnet.com/master-reading-list-for-quants/](http://www.quantnet.com/master-reading-list-for-quants/) As for websites and blogs, there are only a handful of them out there (this is a niche field, after all). I run [http://www.quantnet.com](http://www.quantnet.com) and I'm a member on [http://wilmott.com](http://wilmott.com) as well as [http://nuclearphynance.com](http://nuclearphynance.com). The Quant Finance group on LinkedIn is another popular destination. Every site has a different flavor and attracts a fairly different audience. You have to sample them all and see the kind of questions asked there, the kind of members, and how they answer. WARNING: some sites are not very newbie-friendly, as they cater to working professionals. As I said, it will take time to learn the ins and outs of each site.
null
CC BY-SA 2.5
null
2011-01-31T21:13:09.483
2011-01-31T21:33:51.200
2011-01-31T21:33:51.200
43
43
null
16
2
null
9
20
null
VIX measures volatility. It doesn't always go up if stocks go down.
null
CC BY-SA 2.5
null
2011-01-31T21:16:23.517
2011-01-31T21:16:23.517
null
null
null
null
17
2
null
1
6
null
My pick is "[An introduction to the mathematics of financial derivatives](http://books.google.com/books?id=mWzMpAex1_kC)" by Salih N. Neftci, though it's definitely harder to digest than Hull.
null
CC BY-SA 2.5
null
2011-01-31T21:17:03.017
2011-01-31T21:17:03.017
null
null
38
null
18
1
null
null
30
17342
There are [three different commonly used Value at Risk (VaR) methods](http://www.investopedia.com/articles/04/092904.asp):

- Historical method
- Variance-Covariance method
- Monte Carlo

What is the difference between these approaches, and under what circumstances should each be used?
What is the difference between the methods for calculating VaR?
CC BY-SA 2.5
null
2011-01-31T21:17:48.897
2017-10-05T00:02:50.300
null
null
17
[ "risk", "value-at-risk" ]
19
2
null
3
5
null
[Wilmott](http://www.wilmott.com/) and [NuclearPhynance](http://www.nuclearphynance.com/) are two fairly popular forums, although [quant.stackexchange.com](http://quant.stackexchange.com) will hopefully serve as a better resource in the future.
null
CC BY-SA 2.5
null
2011-01-31T21:20:04.807
2011-01-31T21:20:04.807
null
null
17
null
20
1
62
null
20
3172
When using time-series analysis to forecast some type of value, what types of error analysis are worth considering when trying to determine which models are appropriate? One of the big issues that may arise is that successive residuals between the 'forecast' and the 'realized' value of the variable may not be properly independent of one another, as large amounts of data will be reused from one data point to the next. To give an example: if you fit a GARCH model to forecast volatility over a given time horizon, the fit will use a certain amount of data, a forecast is produced and then compared to the realized volatility observed over that period, and it is then possible to compute some kind of 'loss' value for that forecast. Once everything moves forward a time period, assuming we refit (but even if we reuse the same parameters), there will be a very large overlap in the data underlying this second forecast and realized volatility. Since it is common to want a model that minimises these 'losses' in some sense, how do you deal with the residuals produced in this way? Is there a way to remove the dependency? Are successive residuals dependent, and how could this dependency be measured? What tools exist to analyse these residuals?
What type of analysis is appropriate for assessing the performance of time-series forecasts?
CC BY-SA 2.5
null
2011-01-31T21:25:28.170
2016-10-26T07:39:48.443
null
null
61
[ "forecasting" ]
23
2
null
3
2
null
[Bionic Turtle's](http://www.bionicturtle.com/forum/) forums aren't bad. Some of it is aimed at the FRM (Financial Risk Manager) exam, but there's also a [section dedicated to Quant Finance](http://www.bionicturtle.com/forum/viewforum/5/).
null
CC BY-SA 2.5
null
2011-01-31T21:51:01.163
2011-01-31T21:51:01.163
null
null
79
null
24
2
null
3
4
null
I like this site [http://quant.ly/](http://quant.ly/)
null
CC BY-SA 2.5
null
2011-01-31T21:53:08.947
2011-01-31T21:53:08.947
null
null
82
null
25
1
28
null
49
8640
According to the [Wikipedia article](http://en.wikipedia.org/wiki/Option_(finance)#Historical_uses_of_options), > Contracts similar to options are believed to have been used since ancient times. In London, puts and "refusals" (calls) first became well-known trading instruments in the 1690s during the reign of William and Mary. Privileges were options sold over the counter in nineteenth century America, with both puts and calls on shares offered by specialized dealers. But how did traders actually price the simplest options before the Black-Scholes-Merton model became common knowledge? What were the most popular methods to determine arbitrage-free prices before 1973?
Option pricing before Black-Scholes
CC BY-SA 2.5
null
2011-01-31T21:56:54.423
2020-04-02T11:54:56.323
2020-06-17T08:33:06.707
-1
70
[ "options", "history", "black-scholes" ]
26
1
77
null
27
12612
Are there any empirical observations or practices indicating when to prefer a local volatility model for pricing over a stochastic volatility model, or vice versa?
Local Volatility vs. Stochastic Volatility
CC BY-SA 2.5
null
2011-01-31T21:56:55.617
2021-04-25T01:34:28.240
null
null
15
[ "local-volatility", "stochastic-volatility" ]
27
1
58
null
20
15585
When looking at option chains, I often notice that the (broker-calculated) implied volatility has an inverse relation to the strike price. This seems true both for calls and puts. As a current example, I could point at the SPY calls for MAR31'11: the 117 strike has 19.62% implied volatility, which decreases quite steadily until the 139 strike, which has just 11.96% (SPY is now at 128.65). My intuition would be that volatility is a property of the underlying, and should therefore be roughly the same regardless of strike price. Is this inverse relation expected behaviour? What forces would cause it, and what does it mean? Having no idea how my broker calculates implied volatility, could it be the result of them using alternative (wrong?) inputs for calculation parameters like interest rate or dividends?
Why does implied volatility show an inverse relation with strike price when examining option chains?
CC BY-SA 2.5
null
2011-01-31T21:57:05.983
2019-08-27T13:22:09.873
null
null
53
[ "options", "implied-volatility", "option-pricing" ]
28
2
null
25
23
null
You may want to look at Chapter 5 - "The Quest for the Option Formula" from the [Derivatives](http://web.archive.org/web/20101026225940/http://www.thederivativesbook.com/contents.html) book. The book is available online for free and it has a very decent review of approaches that were used 20-30 years before the Black-Scholes-Merton equation.
null
CC BY-SA 3.0
null
2011-01-31T22:02:11.603
2015-05-18T08:59:43.173
2015-05-18T08:59:43.173
15485
15
null
29
2
null
25
12
null
I think this slightly misses the point. Before Black-Scholes options prices were set entirely by human judgement, just like prices in many other markets are set, which is why this model was so important. Peter Bernstein has a good recollection of this kind of behavior in ["Capital Ideas"](http://rads.stackoverflow.com/amzn/click/0029030129).
null
CC BY-SA 2.5
null
2011-01-31T22:06:18.477
2011-01-31T22:06:18.477
null
null
17
null
30
1
1131
null
8
1160
There are a few methods, such as Copeland-Antikarov, Herath-Park, and Cobb-Charnes, to compute project volatility; however, these methods produce upward-biased volatility estimates. What is the best method I could use to compute project volatility for real option valuation?
What is the best method to compute project volatility in Real Option Valuation?
CC BY-SA 2.5
null
2011-01-31T22:18:28.133
2014-02-13T18:59:46.427
2014-02-13T18:59:46.427
null
null
[ "options", "volatility", "real-options" ]
31
2
null
25
8
null
To add on to what others have said: the formula still does not provide a price -- just a way to calculate "implied" volatility. BSM calculates a hypothetical value (using binary branchings as the storytelling tool), and this hypothetical merely provides a reference for the common "what-if" question. The only sense in which "arbitrage free" entered the minds of traders before BSM was "Am I being cheated?" One more thing to add: the volume of derivatives trading exploded after the BSM formula, so there wasn't that much derivatives trading going on beforehand.
null
CC BY-SA 2.5
null
2011-01-31T22:31:35.177
2011-01-31T22:31:35.177
null
null
104
null
32
2
null
9
12
null
VIX also has a lot of complexities that make it a less-than-ideal hedging tool if you're buying a VIX ETF. [http://vixandmore.blogspot.com/](http://vixandmore.blogspot.com/) goes into it at length and can probably also answer any questions you have about the VIX as a hedge. To expand on what @barrycarter said, the VIX is better as a hedge against kurtosis, not against downward movements.
null
CC BY-SA 2.5
null
2011-01-31T22:35:53.580
2011-01-31T22:35:53.580
null
null
104
null
33
2
null
18
14
null
Each method quoted by Shane has advantages and flaws (presuming that I understand them properly).

The first (historical) method has the big advantage that it doesn't require estimating any probability law: it is just a kind of evolved scenario re-playing "as of" today, using the history of (usually) one-day market moves over one or two years. So once you know how to value your portfolio (no matter how complex it is), you have something that mechanically gives you your quantile and hence your VaR. The problem with the method is that there are many econometric hypotheses, hard to test in practice, that must hold for this kind of method to be trustworthy.

Second, by "Variance-Covariance" I think Shane means that risk-factor returns are assumed to follow a multivariate normal (or log-normal) law. Once again, the upside of the method is its simplicity once you have econometrically estimated your variance-covariance matrix. Unfortunately, one-day returns exhibit fat tails and asymmetry, so these laws aren't the best fit; this can be partially overcome by using other multivariate laws that are elliptical. Another thing to keep in mind: with a VaR indicator and a portfolio that isn't rebalanced over the day, when the market becomes excited you (and your management) expect your VaR to go up, even though nothing much has changed in your econometric estimation. What you really want then is not an unconditional VaR but a conditional one, so you resort to some way of overweighting recent observations versus older ones (as in EWMA methods). You then get an indicator that is really "alive", instead of one that takes a long time to adjust to market conditions.

Let me now explain why I mentioned elliptical laws (the multivariate Gaussian is part of this class): under this assumption (if I remember well), VaR qualifies as a true risk measure as defined by Artzner et al., which is not true for general probability laws, since VaR then lacks the subadditivity axiom.

Another point: when you have nonlinear positions in your portfolio, such as options, what is usually done (and can be really wrong if not given sufficient attention) is a first-order Taylor expansion with respect to the risk factors, after which a linear VaR is calculated. You can extend the approximation to higher orders, but if you have many risk factors, even the second-derivative estimates are already an achievement; so in practice you stay linear and periodically estimate the error made when quadratic terms are taken into account (or resort to Monte Carlo methods).

Finally, Monte Carlo methods are effective but very time-consuming, since you are trying to evaluate a quantile, i.e. a rare event, and you need a very large number of simulations to get something good; moreover, it is not always clear which law should be simulated. In portfolios with derivatives in particular, it is tempting to use the risk-neutral measure in your simulations, but it differs from historical estimates in two ways. First, the underlying model is usually calibrated to risk factors over long periods, whereas here you only want estimates over the next day, so the two measures serve different purposes. And second, as you may know, in theory the difference between the historical and risk-neutral measures is hidden "in the drift" of the risk-factor dynamics (strictly this holds only if a complete market is assumed, but let's go with it), and over one day you can discard this difference relative to the diffusion term, which should be the same under both measures. In practice, though, risk-neutral volatilities (i.e. market-calibrated volatilities) are almost always higher than historical (realized) ones.

Well, here are my two cents. Best regards
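To make the historical-simulation approach described above concrete, here is a minimal sketch in Python; the P&L history is simulated and all numbers are illustrative:

```python
import numpy as np

# Hypothetical one year of daily P&L for a portfolio, in dollars
# (fat-tailed on purpose, to loosely mimic real one-day returns).
rng = np.random.default_rng(0)
pnl = rng.standard_t(df=4, size=252) * 10_000

# Historical-simulation VaR: the empirical 1% quantile of past P&L.
var_99 = -np.percentile(pnl, 1)
print(f"1-day 99% historical VaR: ${var_99:,.0f}")
```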
null
CC BY-SA 2.5
null
2011-01-31T22:38:14.193
2011-01-31T22:38:14.193
null
null
92
null
34
2
null
14
14
null
Delta is the partial derivative of the value of the option with respect to the value of the underlying asset. An option with a delta of 0.5 (here listed as +50 points) goes up \$0.50 if the underlying asset goes up \$1.00, or goes down \$0.50 if the underlying asset goes down \$1.00. Keep in mind that delta is an instantaneous derivative, so it will change both with time (charm is the change in delta with time) and with changes in the value of the underlying asset (gamma is the change in delta with the underlying, which is also the second partial derivative of the option value with respect to the underlying asset value). The actual deltas are a little different from +50, +25, and so on, but they're close enough; I am sure that you can find the real delta values. I guess they're listed like this because if you're hedging a portfolio you really care about the delta, not the strike. E.g., if I only wanted to delta hedge and owned one share, I could buy two -50 delta puts, which sum to a delta of zero. The Wikipedia page on [Greeks](http://en.wikipedia.org/wiki/Greeks_%28finance%29) is a pretty good starting point.
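For readers who want to reproduce the quoted delta levels, here is a minimal Black-Scholes delta calculation in Python (a sketch with illustrative inputs; as noted above, the quoted market deltas are rounded):

```python
from math import log, sqrt
from statistics import NormalDist

def bs_delta(S, K, T, r, sigma, is_call=True):
    """Black-Scholes delta of a European option (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    N = NormalDist().cdf
    return N(d1) if is_call else N(d1) - 1.0

# An at-the-money call has delta near +0.50 ("+50 points");
# the matching put has delta near -0.50.
print(bs_delta(100, 100, 0.25, 0.01, 0.30))         # ~ 0.54
print(bs_delta(100, 100, 0.25, 0.01, 0.30, False))  # ~ -0.46
```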
null
CC BY-SA 2.5
null
2011-01-31T22:41:05.343
2011-01-31T22:41:05.343
null
null
106
null
35
1
193
null
17
2593
Consider the following strategies:

- a stat arb strategy with no overnight exposure, but significant market exposure intraday
- a market timing model which is always long or short the market
- etc.

Is it meaningful to consider the betas of strategies like these? Or should we ignore beta when the portfolio returns have low (near zero) correlation to market returns? How do you use beta?
Is beta of a portfolio always meaningful?
CC BY-SA 2.5
null
2011-01-31T23:07:33.457
2011-02-09T17:29:51.550
null
null
108
[ "hedging", "beta", "portfolio" ]
36
1
1835
null
17
1048
Assume we decompose the daily (log) returns of a stock as beta times market movement plus an idiosyncratic part. If this is done ex-ante, what proportion of the variance is explained by the market component vs. the idiosyncratic part? I am looking more for rule of thumb/experience of others, based on, say, U.S. equities in the S&P 1500. Another way of putting this is "what is the (average/rule of thumb/expected/garden-variety) Pearson correlation coefficient between the return of a stock and the return of the market?"
Approximately what proportion of a stock’s volatility is explained by market movement?
CC BY-SA 2.5
null
2011-01-31T23:09:04.517
2011-09-15T17:39:25.157
null
null
108
[ "beta", "variance", "correlation", "market" ]
37
1
null
null
43
11122
I'm currently using IB's Java API and getting feeds through them. However, the real-time feed is updated only every 250 ms and the historical feed only every second. I'm primarily looking for ES data and other index futures and ETFs. I'm not looking at FX, because that data is the most subjective: there is no exchange. I want a setup that allows me to get the most accurate real-time tick data and market depth. Tick by tick would be ideal, but probably prohibitively expensive. What setup gives the most accurate data and depth?
What broker/feed/API setup allows for recording the most accurate data (cheaply)?
CC BY-SA 3.0
null
2011-01-31T23:13:33.573
2014-11-04T12:48:39.167
2013-04-04T14:16:46.253
1652
8
[ "backtesting", "data", "high-frequency" ]
38
1
444
null
15
1175
What is the theoretical/mathematical basis for the valuation of [C]MBS and other structured finance products? Is the methodology mostly consistent across different products?
How are prices calculated for commercial/residential mortgage-backed securities?
CC BY-SA 2.5
0
2011-01-31T23:45:21.927
2015-10-20T20:21:35.473
2011-01-31T23:57:54.333
117
117
[ "mbs", "valuation", "structured-finance" ]
39
2
null
35
10
null
It partly depends on the use case. If one is assembling a portfolio that includes multiple different strategies and is mixing this with a heavy weighting to an equity index, then this might be a useful measure. [Zero or negative beta](http://en.wikipedia.org/wiki/Beta_%28finance%29) does have meaning, in the same way that correlation has meaning. In the more traditional usage of CAPM, a better question might be "which beta to use" in this context. It's still meaningful (in so far as any measure of beta is meaningful), but an active strategy, or one which trades different assets, should not be compared to the beta of a long-only stock portfolio. Regarding the specific examples that you give: there are various hedge fund indices which cover statistical arbitrage and long/short equities. These may or may not be good benchmarks, but they are more likely to be useful than a standard equity market index. Aswath Damodaran has [a blog post on this](http://aswathdamodaran.blogspot.com/2009/02/can-betas-be-negative-and-other-well.html) that summarizes some of the issues. Although I really think it's important to reflect on what $\beta$ is supposed to measure -- market risk -- and ask yourself whether your definition of "market" is appropriate for the context.
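Whichever benchmark you settle on, the mechanical computation of beta is the same OLS slope; a minimal sketch (the return series here are simulated placeholders):

```python
import numpy as np

def beta(strategy_returns, market_returns):
    """OLS beta: Cov(r_strategy, r_market) / Var(r_market)."""
    cov = np.cov(strategy_returns, market_returns)
    return cov[0, 1] / cov[1, 1]

# A strategy built to be independent of the market shows a beta near
# zero, which by itself says little about the strategy's risk.
rng = np.random.default_rng(0)
mkt = rng.normal(0.0, 0.01, size=252)
strat = rng.normal(0.0, 0.01, size=252)
print(beta(strat, mkt))
```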
null
CC BY-SA 2.5
null
2011-01-31T23:49:03.780
2011-02-01T00:15:29.577
2011-02-01T00:15:29.577
17
17
null
40
2
null
25
15
null
Options and futures were common instruments in France at the end of the 19th century. Louis Bachelier, [in his 1900 thesis](http://archive.numdam.org/article/ASENS_1900_3_17__21_0.pdf), derives the price of a European option when the underlying asset is normally distributed. Interestingly, he seems to have some strong opinions about mathematical finance in his introduction to his thesis: > The calculus of probabilities will undoubtedly never be applied to the movement of stock prices, and the dynamics of the stockmarket will never be an exact science.
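For reference, Bachelier's result with zero interest rates reduces to a short closed form; a sketch, where sigma is an absolute (price-unit) volatility rather than the percentage volatility of Black-Scholes:

```python
from math import sqrt
from statistics import NormalDist

def bachelier_call(S, K, sigma, T):
    """European call under Bachelier's normal model, zero rates.
    sigma is in price units per sqrt(year), T is in years."""
    nd = NormalDist()
    d = (S - K) / (sigma * sqrt(T))
    return (S - K) * nd.cdf(d) + sigma * sqrt(T) * nd.pdf(d)

print(bachelier_call(S=100.0, K=100.0, sigma=20.0, T=1.0))  # ~ 7.98
```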
null
CC BY-SA 2.5
null
2011-01-31T23:57:46.457
2011-01-31T23:57:46.457
null
null
114
null
41
1
54
null
21
5096
It appears that the log 'returns' of the VIX index have a (negative) correlation to the log 'returns' of e.g. the S&P 500 index. The r-squared is on the order of 0.7. I thought VIX was supposed to be a measure of volatility, both up and down. However, it appears to spike up when the market spikes down, and has a fairly strong negative correlation to the market (in fact, the correlation is much stronger than e.g. for your garden variety tech stock). Can anyone explain why? The mean 'returns' of both indices are accounted for in this correlation, so this is not a result of the expectation of the market to increase at ~7% p.a. Is there a more pure volatility index or instrument?
Why does the VIX index have *any* correlation to the market?
CC BY-SA 2.5
null
2011-01-31T23:59:50.293
2019-08-14T22:11:46.297
null
null
108
[ "vix", "volatility", "correlation", "market" ]
42
2
null
37
7
null
The [T4 API](http://www.ctsfutures.com/t4_api.aspx) is free to try for two weeks and it has really good [documentation](http://sim.t4login.com/help/api/)... if you contact their support they will usually extend your trial period as long as you want. Here are some of the features it has:

- Real-time market feed (ticks).
- Historical tick data (as well as second, minute, hour and day charts).
- VB/C#/C++ interface.
- Programming examples that demonstrate the full functionality of the API.

Unfortunately T4 doesn't have any equities, only futures and currencies.
null
CC BY-SA 2.5
null
2011-02-01T00:02:55.453
2011-02-01T06:20:36.470
2011-02-01T06:20:36.470
78
78
null
43
1
277
null
27
14362
Is there a standard model for market impact? I am interested in the case of high-volume equities sold in the US, during market hours, but would be interested in any pointers.
Is there a standard model for market impact?
CC BY-SA 2.5
null
2011-02-01T00:03:23.143
2021-05-30T17:18:47.623
null
null
108
[ "market-impact", "models" ]
44
1
73
null
31
6468
One of the main problems when trying to apply mean-variance portfolio optimization in practice is its high input sensitivity. As can be seen in (Chopra, 1993) using historical values to estimate returns expected in the future is a no-go, as the whole process tends to become error maximization rather than portfolio optimization. > The primary emphasis should be on obtaining superior estimates of means, followed by good estimates of variances. In that case, what techniques do you use to improve those estimates? Numerous methods can be found in the literature, but I'm interested in what's more widely adopted from a practical standpoint. Are there some popular approaches being used in the industry other than [Black-Litterman](http://www.blacklitterman.org/) model? --- Reference: Chopra, V. K. & Ziemba, W. T. [The Effect of Errors in Means, Variances, and Covariances on Optimal Portfolio Choice](http://faculty.fuqua.duke.edu/~charvey/Teaching/BA453_2006/Chopra_The_effect_of_1993.pdf). Journal of Portfolio Management, 19: 6-11, 1993.
What methods do you use to improve expected return estimates when constructing a portfolio in a mean-variance framework?
CC BY-SA 2.5
null
2011-02-01T00:08:54.067
2013-12-13T09:51:59.337
null
null
38
[ "modern-portfolio-theory", "mean-variance", "expected-return", "estimation" ]
45
1
53
null
27
3218
A potential issue with automated trading systems, that are based on Machine Learning (ML) and/or Artificial Intelligence (AI), is the difficulty of assessing the risk of a trade. An ML/AI algorithm may analyze thousands of parameters in order to come up with a trading decision and applying standard risk management practices might interfere with the algorithms by overriding the algorithm's decision. What are some basic methodologies for applying risk management to ML/AI-based automated trading systems without hampering the decision of the underlying algorithm(s)? Update: An example system would be: Genetic Programming algorithm that produces trading agents. The most profitable agent in the population is used to produce a short/long signal (usually without a confidence interval).
How are risk management practices applied to ML/AI-based automated trading systems
CC BY-SA 2.5
null
2011-02-01T00:16:50.123
2012-04-17T12:02:37.053
2011-02-01T00:45:46.430
78
78
[ "risk", "algorithmic-trading", "risk-management", "best-practices", "trading" ]
46
2
null
43
10
null
I don't believe that there is a "standard" model per se; in fact, there are many considerations around market impact models, so you would need to be more specific. At the most basic level, you might define impact as $P_{\text{first fill}} - P_{\text{last fill}}$ once your order is actually in the order book (i.e. not including other costs like "opportunity cost"). This doesn't take into account any other trades that may be taking place at the same time, or other events that might be moving the price beyond your order. Nor does it help you forecast the market impact of an impending order (which would require some knowledge of time of day, volume, volatility, etc.). That being said, I would certainly recommend reading ["Optimal Trading Strategies"](http://rads.stackoverflow.com/amzn/click/0814407242) (Kissell, Glantz 2003), which gives a good overview (in addition to covering other transaction cost subjects).
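While there is no standard, the empirical "square-root" rule of thumb is widely cited in the transaction-cost literature; a sketch (the constant c and all inputs are illustrative, not calibrated):

```python
import numpy as np

def sqrt_impact(order_size, daily_volume, daily_vol, c=1.0):
    """Expected impact as a fraction of price: c * sigma * sqrt(Q / V).
    c is an empirical constant, roughly of order one."""
    return c * daily_vol * np.sqrt(order_size / daily_volume)

# Trading 5% of a day's volume in a name with 2% daily volatility:
print(sqrt_impact(50_000, 1_000_000, 0.02))  # ~ 0.45% of the price
```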
null
CC BY-SA 3.0
null
2011-02-01T00:18:43.927
2017-08-29T08:25:20.103
2017-08-29T08:25:20.103
6283
17
null
47
1
49
null
17
8117
I want to create a lognormal distribution of future stock prices. Using a Monte Carlo simulation I came up with the standard deviation as being $\sqrt{\text{days}/252} \times \text{volatility} \times \text{mean} \times \log(\text{mean})$. Is this correct?
How to calculate future distribution of price using volatility?
CC BY-SA 2.5
null
2011-02-01T00:29:55.100
2011-10-02T13:09:50.310
2011-02-03T16:40:52.230
17
123
[ "volatility", "lognormal" ]
48
2
null
41
1
null
Markets seem to have a bias against being bearish. Lower stock prices are perceived as more risky, and as risk increases so does implied volatility. For example, as the market decreases there will generally be more demand for puts, causing higher prices and higher implied volatility. An up market will imply less volatility for the same reason.
null
CC BY-SA 2.5
null
2011-02-01T00:32:50.263
2011-02-01T00:32:50.263
null
null
117
null
49
2
null
47
6
null
I'm not sure I understand, but if you want to compute the variance of $\exp(X)$, where $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$, that variance is (from [Wikipedia](http://en.wikipedia.org/wiki/Lognormal)): $$\left(\exp{(\sigma^2)} - 1\right) \exp{(2\mu + \sigma^2)}$$
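A quick Monte Carlo check of that formula (a sketch with arbitrary parameters):

```python
import numpy as np

mu, sigma = 0.0, 0.2
rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=1_000_000)

# Analytic variance of exp(X) for X ~ N(mu, sigma^2).
analytic = (np.exp(sigma**2) - 1.0) * np.exp(2.0 * mu + sigma**2)
print(np.exp(x).var(), analytic)  # the two numbers should agree closely
```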
null
CC BY-SA 2.5
null
2011-02-01T00:37:59.870
2011-02-01T00:37:59.870
null
null
108
null
50
2
null
44
9
null
You raise a very important point, which unfortunately doesn't have a simple answer. Black-Litterman addresses the allocation problem by allowing you to provide a prior within a Bayesian framework. It doesn't really tell you how to produce the prior itself. But more importantly, it doesn't address the fundamental problem: it's difficult to accurately predict expected returns. So you can improve on a static, simple linear model ("this was the mean return over the last $n$ years") by building a better model to predict expected returns, but doing so is the big challenge in finance in general. And standard textbook models haven't done much to improve the situation; the most success in time series modeling has been around volatility prediction (e.g. with some of the GARCH models), which addresses the variance part of the problem. ARIMA and other time series models have had mixed success when trying to predict returns for financial assets.
null
CC BY-SA 2.5
null
2011-02-01T00:40:27.277
2011-02-01T00:40:27.277
null
null
17
null
51
2
null
36
5
null
I just want to give a qualitative answer to your question: the volatility of a market is different from the volatility of a stock, much as Copeland and Antikarov (2001) say that "...the volatility of a gold mine is different than the volatility of gold..." If you want to quantitatively compute the percentage of a stock's volatility driven by the market's volatility, then I would do a back-calculation using an options formula.
null
CC BY-SA 2.5
null
2011-02-01T00:46:06.640
2011-02-01T03:31:53.873
2011-02-01T03:31:53.873
null
null
null
52
2
null
25
21
null
The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble. ~ Ralph Waldo Emerson ~ Black-Scholes made it possible for an idiot with a calculator to imagine that he was smart enough to judge the value of options. It has always been possible to determine option value; Black-Scholes is not necessarily the best approach. In his preface to the sixth edition of Security Analysis, noted value investor Seth Klarman discusses how Graham and Dodd's methods, laid out in the original edition (copyright 1934), are more relevant than ever, and he describes how he has successfully applied them. Mr. Klarman is relatively famous for his use of options in an investment strategy that has been significantly more profitable than average. He has taken advantage of less well-informed investors who falsely pride themselves on their mathematical sophistication and familiarity with formulas like Black-Scholes. Those investors "plug and chug" their numbers into their quantitative models [without having a clue about the fundamentals driving the probability distribution upon which the mathematics of the formula is based], receive their below-average returns, and never realize that when almost everyone is using Black-Scholes, the market is ripe for a successful contrarian anti-Black-Scholes strategy. An option pricing strategy based upon value-investing precepts would be driven by a rigorous analysis of downside potential and a similar analysis of upside potential. The rigor necessary for understanding the sensitivity of the stock to various scenarios would inform the judgement necessary to determine the likelihood of those different scenarios. Option pricing based upon Graham and Dodd's methods would not be driven by irrationally sanguine market-driven estimates of either implied or historical volatility; it would reflect the actual risks faced by the underlying assets, not the market's assessment of those risks. As you will recall, value investing is based upon the premise that market-based assessments are frequently very, very wrong. Anyone seeking additional insight into Seth Klarman's methodology should obtain a copy of Security Analysis and a copy of Mr. Klarman's own text, [Margin of Safety](https://rads.stackoverflow.com/amzn/click/com/0887305105). Of course, there are plenty of other reasons to read, re-read, and re-re-read Security Analysis and [Margin of Safety](https://rads.stackoverflow.com/amzn/click/com/0887305105) beyond just an alternative to Black-Scholes.
null
CC BY-SA 4.0
null
2011-02-01T00:58:55.857
2020-04-02T11:54:30.527
2020-04-02T11:54:30.527
38257
121
null
53
2
null
45
13
null
The risk involved in trading is everywhere and always a multifaceted thing: it includes the volatility of the selected asset, the leverage and concentration of the portfolio, whether there is a stop loss, a hedge, etc. Also, risk management is frequently not tied to the "alpha model" directly (e.g. VaR, shortfall, and scenario testing). For instance, one well-known way of sizing a position is [the Kelly formula](http://en.wikipedia.org/wiki/Kelly_criterion): $f^{*} = \frac{bp - q}{b}$ This makes no assumptions about the directional model that is used to enter the position. You can infer the inputs (e.g. the probability of winning) from a historical simulation, regardless of whether the model is black-box, grey-box, or white-box.
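A minimal sketch of the Kelly formula quoted above; the win probability and payoff ratio would come from your own historical simulation:

```python
def kelly_fraction(p, b):
    """Kelly bet size f* = (b*p - q) / b, with p = win probability,
    q = 1 - p, and b = win/loss payoff ratio."""
    q = 1.0 - p
    return (b * p - q) / b

# A strategy that wins 55% of the time with a 1:1 payoff:
print(kelly_fraction(0.55, 1.0))  # 0.10 -> risk 10% of capital per bet
```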
null
CC BY-SA 2.5
null
2011-02-01T01:08:27.770
2011-02-01T01:18:55.873
2011-02-01T01:18:55.873
17
17
null
54
2
null
41
15
null
Increased volatility (high VIX) signifies more risk. To keep their portfolio in line with their risk preferences, market participants deleverage. Since long positions outweigh short positions in the market as a whole, deleveraging entails a lot of selling and less buying. The relative increase in selling causes downward pressure on stocks.
null
CC BY-SA 2.5
null
2011-02-01T03:04:25.137
2011-02-01T03:04:25.137
null
null
53
null
55
2
null
47
3
null
The distribution of the log of a stock price in $n$ days is a normal distribution with mean $\log(\text{current price})$ and standard deviation $\text{volatility} \times \sqrt{n/365.2425}$ if you're using calendar days, and assuming no dividends and a 0% risk-free interest rate. Note that the standard deviation is independent of the current price: if $\log(\text{current price})$ increases by 0.3 (for example), the stock has increased by 35%, regardless of its current price. To include dividends and the risk-free interest rate, see [http://en.wikipedia.org/wiki/Black-Scholes](http://en.wikipedia.org/wiki/Black-Scholes), which models future stock prices with an eye towards pricing options.
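A small simulation of the distribution described above (illustrative parameters; no dividends and zero rates, as assumed in the answer):

```python
import numpy as np

S0, vol, n = 100.0, 0.25, 30        # spot, annualized vol, horizon in days
sd = vol * np.sqrt(n / 365.2425)    # stdev of the log-price at the horizon

rng = np.random.default_rng(2)
log_s = np.log(S0) + rng.normal(0.0, sd, size=100_000)
prices = np.exp(log_s)              # lognormal terminal prices
print(np.percentile(prices, [5, 50, 95]))
```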
null
CC BY-SA 2.5
null
2011-02-01T03:12:21.777
2011-02-03T16:39:41.613
2011-02-03T16:39:41.613
117
null
null
56
2
null
27
5
null
"My intuition would be that volatility is a property of the underlying, and should therefore be roughly the same regardless of strike price". I agree, but the market doesn't. People who buy out-of-money calls tend to be more optimistic than those who buy at-the-money calls, so out-of-money calls are "overpriced" and thus have a higher volatility. Oddly, people who buy deep-in-the-money calls ALSO tend to be more optimistic, so these calls ALSO have a higher implied volatility. Why? You can buy a deep-in-the-money call for much less than the stock price. When the stock goes up 1 point, the deep-in-the-money call goes up almost 1 point too, so you get the same gain for less investment (ie, leverage). You probably also noticed implied volatility varies with expiration date too. Ultimately, the market determines how much an option is worth, and thus the volatility. Black-Scholes' belief that volatility was a fundamental characteristic of an instrument isn't really accurate.
null
CC BY-SA 2.5
null
2011-02-01T03:32:58.180
2011-02-01T03:32:58.180
null
null
null
null
57
2
null
37
8
null
[http://ratedata.gaincapital.com/](http://ratedata.gaincapital.com/) has tick by tick historical data for Forex if that helps.
null
CC BY-SA 2.5
null
2011-02-01T03:35:29.347
2011-02-01T03:35:29.347
null
null
null
null
58
2
null
27
14
null
The skew is almost always bid for puts on the stock market. When stocks go down, people tend to panic and volatility goes up as a result. Since the puts get more vega when the market goes down, they trade at higher vols. Read up on stochastic volatility for a more in-depth explanation.
null
CC BY-SA 2.5
null
2011-02-01T03:43:35.793
2011-02-01T03:43:35.793
null
null
83
null
59
2
null
20
5
null
I am not sure I clearly understand your question, but you can definitely do some analysis on the residuals, especially autocorrelation. If you find any significant autocorrelation, I suggest you add an ARMA process to your model to increase the accuracy of your forecast.
null
CC BY-SA 2.5
null
2011-02-01T03:44:01.427
2011-02-01T03:44:01.427
null
null
134
null
60
1
155
null
9
3325
I would like to take advantage of a volatile market by selling highs and buying lows. As we all know, the RSI indicator is very bad, so I want to create a superior strategy for this purpose. I have tried to model the price using a time-varying ARMA process, with no success so far. Any other ideas?
Mean reverting strategies
CC BY-SA 2.5
null
2011-02-01T03:49:06.897
2011-02-03T19:47:03.307
null
null
134
[ "arima", "strategy" ]
61
2
null
26
10
null
For pricing and hedging a portfolio of vanilla options, stochastic volatility is almost always preferable to local volatility since empirically it more accurately captures the evolution of the smile.
null
CC BY-SA 2.5
null
2011-02-01T03:54:24.580
2011-02-01T03:54:24.580
null
null
83
null
62
2
null
20
14
null
I think you're looking for some way to test for autocorrelation in your residuals. If your model is good -- let's say you have an ARMA(1, 1) model for your forecast -- then the residuals from this model will be white noise. Which is to say that the difference between your forecast and the realization cannot be predicted any better: the residual is some zero-mean, normally distributed error. Let's pick an extreme example: if your residual (the difference between forecast and realized) were always 1, then the residuals would be autocorrelated, and clearly, if your model is always off by 1, you can do better. So if the residuals in your model are autocorrelated, then you can do better. The standard test for this these days is Ljung-Box, but in the past Box-Pierce and Durbin-Watson were also used.
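A minimal sketch of the Ljung-Box test using statsmodels; the residual series here is simulated white noise, so the test should not reject:

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
resid = rng.normal(size=500)  # stand-in for your model's residuals

# Small p-values reject "no autocorrelation up to lag 10", i.e. the
# model is leaving predictable structure in its forecast errors.
print(acorr_ljungbox(resid, lags=[10]))
```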
null
CC BY-SA 2.5
null
2011-02-01T03:59:30.460
2011-02-01T03:59:30.460
null
null
106
null
65
2
null
36
5
null
So my first answer was off base. For some reason I was thinking first moment (idiosyncratic returns), but he's looking for second moment (idiosyncratic volatility). There is a line of research on the returns to portfolios sorted on idiosyncratic volatility, and I was hoping that there were descriptive statistics saying "fraction $\rho$ of stock/portfolio vol is market and $1 - \rho$ is idiosyncratic", but I couldn't find any such stats. So I guess you have to run your own models and find out. If you're thinking about implementing this, I would check out the literature; it may save you some time. [Ang, Hodrick, Xing, and Zhang](http://www.gsb.columbia.edu/faculty/aang/papers/vol.pdf) (Journal of Finance 2006) say that high ivol has low returns. But [Bali and Cakici](http://faculty.baruch.cuny.edu/tbali/BaliCakiciJFQA2008.pdf) (Journal of Financial and Quantitative Analysis 2008) show that their results are due to sorting techniques and that there's really no return (positive or negative) to high ivol. --- I may be off base here, but in terms of econometrics, when you calculate beta you've assumed that the idiosyncratic portion is white noise: mean zero and normally distributed, so it accounts for none of the return. So the ratio of market to idiosyncratic return is 1:0 by definition. Maybe you want to add another risk factor to your model and find the projection of stock returns on that factor, to find the split between the market and that other factor?
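If you do run your own models, the market/idiosyncratic variance split is just the R-squared of the single-factor regression; a minimal sketch:

```python
import numpy as np

def market_variance_share(stock_ret, market_ret):
    """R-squared of r_stock = a + beta * r_market + e: the fraction of
    the stock's return variance explained by the market component."""
    r = np.corrcoef(stock_ret, market_ret)[0, 1]
    return r**2

# With two aligned arrays of daily returns this is a single call, e.g.
# market_variance_share(stock_returns, index_returns)
```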
null
CC BY-SA 2.5
null
2011-02-01T04:17:33.630
2011-02-02T12:55:44.147
2011-02-02T12:55:44.147
106
106
null
66
1
70
null
13
3955
Explain pair trading to a layman. What is it, why would you want to do it, and what are the risks? Provide a real life example.
How does pair trading work?
CC BY-SA 3.0
null
2011-02-01T04:19:13.130
2017-05-30T23:29:16.827
2012-02-24T07:08:05.813
74
74
[ "pairs-trading" ]
67
2
null
25
7
null
Ed Thorp is of the opinion that he could price options properly before Fischer Black and Myron Scholes: [link here (doc)](http://www.google.ca/url?sa=t&source=web&cd=3&ved=0CCkQFjAC&url=http%3A%2F%2Fwww.edwardothorp.com%2Fsitebuildercontent%2Fsitebuilderfiles%2Foptiontheory.doc&ei=q4hHTbr-CIiqsAOt6rCiAg&usg=AFQjCNHsCRmQcDkfu9F9o5DIFsTxUeK5Xg&sig2=O3SvA_OeVB-1n3Qif4LjgA) It sounds like he was using a risk-neutral approach.
null
CC BY-SA 2.5
null
2011-02-01T04:54:02.633
2011-02-01T04:54:02.633
null
null
74
null
68
2
null
45
20
null
ML/AI systems are susceptible to a number of risks not traditionally discussed in risk management:

- What I call "backtest arbitrage". In the process of automated model generation and testing, your machine learner may discover, exploit, and concentrate on irregularities in your backtesting system which do not exist in the real world. If, for example, your fill simulation is erroneous, you have not accounted for borrow costs, forgot to deal with dividends properly, etc., sufficiently powerful search techniques will find strategies which capture these nonexistent "arbs".
- If you sequentially generate, test, and refine many trading models, you run into the problem of "datamining bias". Here one has used the same data to simultaneously select the best model and estimate its performance via backtest. The estimate will be positively biased, and the size of the bias can be difficult to estimate if one has not kept careful track of all the strategies tested (see the simulation sketch below).
- Blackbox models are often subject to non-stationarities of the "grue and bleen" variety. That is, they may behave radically differently out of sample due to non-stationarities of their input data and discontinuities in their processing of input data. An example would be an AI strategy which first checks if VIX is above 60, then trades one substrategy, otherwise it trades a different one. Your backtest period may contain little data in the "over 60" regime, and you may find yourself in such a regime.

Regrettably many of these issues exist at the human level, and there is little one can do statistically to detect them or correct for them. They require great attention to process.
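The size of the datamining bias in the second point is easy to demonstrate by simulation; in this sketch every "strategy" is pure noise, yet the best in-sample Sharpe ratio looks excellent:

```python
import numpy as np

# 1,000 random "strategies" with zero true edge, backtested on the
# same year of data (252 trading days each).
rng = np.random.default_rng(4)
daily_ret = rng.normal(0.0, 0.01, size=(1000, 252))
sharpe = daily_ret.mean(axis=1) / daily_ret.std(axis=1) * np.sqrt(252)

# Selecting the best in-sample performer inflates the estimate badly.
print(sharpe.max())  # typically around 3, despite zero true skill
```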
null
CC BY-SA 2.5
null
2011-02-01T05:22:16.193
2011-02-01T05:22:16.193
null
null
108
null
70
2
null
66
17
null
Pair trading is a market-neutral bet. Instead of saying the market in general is going higher, you say one investment is under/overvalued relative to another, typically similar, investment. The bet is that the spread between the two will widen or narrow, depending on how you set it up. For instance, say I feel GM is going to outperform Ford over the next year. I will buy GM's stock and short Ford's stock. By doing this the market is taken out of the picture, and I make money if the difference between GM's stock and Ford's is greater than it was when I undertook the investment.
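A toy numerical illustration of the GM/Ford example (all prices are made up):

```python
import numpy as np

# Made-up daily closes for the two related stocks.
gm = np.array([34.0, 34.5, 33.8, 35.1, 35.6])
ford = np.array([12.0, 12.2, 11.9, 12.1, 12.0])

# Dollar-neutral pair: long GM, short an equal dollar amount of Ford.
# Daily P&L per dollar invested is the difference of the two returns,
# which is positive exactly when GM outperforms Ford.
pair_ret = gm[1:] / gm[:-1] - ford[1:] / ford[:-1]
print(pair_ret)
```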
null
CC BY-SA 2.5
null
2011-02-01T06:05:48.380
2011-02-01T20:00:56.730
2011-02-01T20:00:56.730
60
60
null
71
2
null
35
2
null
In response to your last question, "how do you use beta?" -- I'd say try to use it as little as possible. Your use seems to be a bit out of the realm of what I'm used to, but whatever beta you get is so prone to observational error that it may not be meaningful.
null
CC BY-SA 2.5
null
2011-02-01T06:15:06.667
2011-02-01T06:15:06.667
null
null
60
null
73
2
null
44
20
null
Short of having a 'reasonable' predictive model for expected returns and the covariance matrix, there are a couple of lines of attack:

- Shrinkage estimators (via Bayesian inference or the Stein class of estimators)
- Robust portfolio optimization
- Michaud's Resampled Efficient Frontier
- Imposing norm constraints on portfolio weights

Naively, shrinkage methods 'shrink' (of course, no?) your estimates, arrived at using historical data, toward some global mean or some target. Within the mean-variance framework, you can use shrinkage estimators for both the expected returns vector and the covariance matrix (see the sketch below). Jorion introduced the application of a 'Bayes-Stein estimator' to portfolio analysis, and Bradley & Efron have a paper on the James-Stein estimator. Alternatively, you can stick to the global minimum variance portfolio, which is less susceptible to estimation errors (in expected returns), and use either the sample covariance matrix or a shrunk estimate.

Robust portfolio optimization seems to be another way 'nicer' portfolios can be constructed. I haven't studied this in any detail, but there's a paper by Goldfarb & Iyengar.

Michaud's Resampled Efficient Frontier is an application of Monte Carlo and bootstrap to addressing the uncertainty in the estimates. It is a way of 'averaging' out the frontier, and it is perhaps best to read Michaud's book or paper to know what they really have to say.

Finally, there might be a way to directly impose constraints on the norm of the portfolio weight vector, which would be equivalent to regularization in the statistical sense.

Having said all that, having a good predictive model for E[r] and Sigma is perhaps worth the effort.

References:

Jorion, Philippe, "Bayes-Stein Estimation for Portfolio Analysis", Journal of Financial and Quantitative Analysis, Vol. 21, No. 3 (September 1986), pp. 279-292.

Jorion, Philippe, "Bayesian and CAPM estimators of the means: Implications for portfolio selection", Journal of Banking & Finance, Volume 15, Issue 3, June 1991.

Grauer, Robert R. and Nils H. Hakansson, "Stein and CAPM estimators of the means in asset allocation", International Review of Financial Analysis, Volume 4, Issue 1, 1995, pp. 35-66.

Goldfarb, Donald and Garud Iyengar, "Robust Portfolio Selection Problems", Mathematics of Operations Research, 28(1): 1-38 (2003).

Michaud, R. (1998). Efficient Asset Management: A Practical Guide to Stock Portfolio Optimization, Oxford University Press.
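As a concrete example of shrinking the covariance matrix, scikit-learn ships a Ledoit-Wolf estimator; a minimal sketch on simulated returns:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(5)
returns = rng.normal(size=(252, 20))  # a year of daily returns, 20 assets

# Shrinks the noisy sample covariance toward a structured target;
# the shrinkage intensity is estimated from the data itself.
lw = LedoitWolf().fit(returns)
print(lw.shrinkage_, lw.covariance_.shape)
```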
null
CC BY-SA 2.5
null
2011-02-01T06:50:51.180
2011-02-01T06:50:51.180
null
null
139
null
74
1
94
null
46
76756
There are all kinds of tools for backtesting linear instruments (like stocks or stock indices). It is a completely different story when it comes to option strategies. Most of the tools used are bespoke software not publicly available. Part of the reason for that seems to be the higher complexity involved, the deluge of data you need (option chains), and the (non-)availability of historical implied vola data. Anyway, my question: are there any good, usable tools for backtesting option strategies (or add-ons for standard packages, or online services, or whatever)? Please also provide info on the price and quality of the products if possible. P.S.: An idea to get to grips with the above challenges would be a tool which uses Black-Scholes - but with historical vola data (e.g. VIX, which is publicly available).
Are there any good tools for back testing options strategies?
CC BY-SA 3.0
null
2011-02-01T07:54:30.207
2022-12-17T00:59:35.300
2011-09-10T02:35:03.620
1355
12
[ "backtesting", "option-strategies", "software" ]
75
2
null
25
14
null
There is a missing link to the early options pricing literature which has been overlooked: put-call parity, along with static delta hedging, was understood in actionable detail well before BSM, and trading and risk management were accomplished through heuristic methods which indeed continued to be used after BSM. I would point to ["Why We Have Never Used the Black-Scholes-Merton Option Pricing Formula"](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.581.884&rep=rep1&type=pdf), which gives an interesting overview and supplies further historical references.
null
CC BY-SA 4.0
null
2011-02-01T08:07:31.040
2020-04-02T11:54:56.323
2020-04-02T11:54:56.323
38257
113
null
76
2
null
44
14
null
Both answers, from Shane and Vishal Belsare, make sense and detail different models. In my experience, I have never been satisfied by a single model, since the majority of papers out there can be split in two categories: those that predict the mean component of the problem, and those that predict the variance component. The ideal (read "practical") model is one that allows you to incorporate your own views on both the expected returns and the variance. On the expected returns side, Black-Litterman is interesting since it lets you express a relative point of view on expectations, which is far more stable and less risky than absolute expected returns. On the variance side, you can use two variance matrices. Theoretically this would be done with a Markov regime-switching regression; a simpler and easier-to-use alternative is a 2-state regression (there is plenty of literature on the Markov-switching model you can read). The 2-state approach consists in modeling the returns of your assets as a mixture of two normal distributions, one explaining returns in the quiet state of the market and the other in the hectic state. The result of such a regression is a variance matrix conditioned on the state of the market (you can then use the VIX as a proxy for the state of the market in order to choose between the two; see the sketch below). I have tried different models in the past, but in my opinion this framework is ahead of the purely theoretical ones. Some references that may be of interest:

Kim, J. and Finger, C., A Stress Test to Incorporate Correlation Breakdown, Journal of Risk, Spring 2000.

McLachlan, G. and Basford, K., Mixture Models: Inference and Applications to Clustering, Marcel Dekker Inc., 1988.
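A minimal sketch of the two-regime covariance idea, using a VIX threshold as the state proxy (the threshold of 25 is illustrative, not a recommendation):

```python
import numpy as np

def regime_covariances(returns, vix, threshold=25.0):
    """Estimate one covariance matrix per market state, with the state
    proxied by the VIX level.
    returns: (T, N) array of asset returns; vix: length-T array."""
    hectic = vix > threshold
    quiet_cov = np.cov(returns[~hectic].T)
    hectic_cov = np.cov(returns[hectic].T)
    return quiet_cov, hectic_cov
```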
null
CC BY-SA 2.5
null
2011-02-01T08:09:16.683
2011-02-01T08:09:16.683
null
null
150
null
77
2
null
26
17
null
There is another reason why stochastic volatility models should usually be preferred to local volatility models. It is explained in the Hagan et al. paper ["Managing Smile Risk"](http://www.math.columbia.edu/~lrb/sabrAll.pdf) on the SABR process, and in simple terms it is the fact that smile dynamics are poorly predicted by local vol models, leading to bad hedging of exotic options. Anyway, local vol models have the nice feature of being "arbitrage-free" (at the beginning), and I think some link between the two approaches can be achieved by the Markovian projection method; for this you can have a look at V. Piterbarg's [paper](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=906473) on the subject and the references therein. Regards
null
CC BY-SA 3.0
null
2011-02-01T08:46:35.637
2014-01-27T16:38:56.060
2014-01-27T16:38:56.060
null
92
null
78
2
null
25
6
null
You'll find lots of info in part 3 ("Myth 1: people did not properly 'price' options before the Black-Scholes-Merton theory") of this paper: "Option traders use (very) sophisticated heuristics, never the Black-Scholes-Merton formula": [http://linkinghub.elsevier.com/retrieve/pii/S0167268110001927](http://linkinghub.elsevier.com/retrieve/pii/S0167268110001927) ([a free preprint can be found here](https://docs.google.com/file/d/0B_31K_MP92hUNjljYjIyMzgtZTBmNS00MGMwLWIxNmQtYjMyNDFiYjY0MTJl/edit?hl=en_GB), page 217). Another source is "Derivative Pricing 60 Years before Black-Scholes: Evidence from the Johannesburg Stock Exchange" by Lyndon Moore and Steve Juh. From the abstract: > We obtain daily data for warrants traded on the Johannesburg Stock Exchange between 1909 and 1922, and for a broker's call option quotes on stocks from 1908 to 1911. We use this new data set to test how close derivative prices are to Black-Scholes (1973) prices and to compute profits for investors using a simple trading rule for call options. We examine whether investors exercised warrants optimally and how they reacted to extensions of the warrants' durations. We show that long before the development of the formal theory, investors had an intuitive grasp of the determinants of derivative pricing. Source: [http://www.buec.udel.edu/coughenj/der_bs_johannesburg.pdf](http://www.buec.udel.edu/coughenj/der_bs_johannesburg.pdf)
null
CC BY-SA 3.0
null
2011-02-01T08:52:33.347
2012-12-14T13:40:29.910
2012-12-14T13:40:29.910
12
12
null
79
1
null
null
38
14131
Naively, it seems that Bayesian modeling, structural models particularly, would be quite useful in finance because of its ability to incorporate market idiosyncrasies and produce accurate probabilistic estimates. The downside, of course, is model brittleness and extremely slow computational speed. Has the quant community overcome these issues, and how common are these tools?
How useful is Markov chain Monte Carlo for quantitative finance?
CC BY-SA 3.0
null
2011-02-01T10:09:20.380
2011-09-09T13:13:58.610
2011-09-08T21:00:20.593
1106
163
[ "probability", "monte-carlo", "modeling" ]
80
1
null
null
11
1369
Are the prices of e-minis such as S&P 500, Russell 2000, EUROSTOXX, etc. manipulated? That is, are there traders who trade large enough positions to make the price go in the direction they want, taking unfair advantage of the rest of the traders in that market? If so, what implication does this have for designing any trading systems for those markets? Is it essentially a fool's errand unless you have enough money to be one of the "manipulators"? I once heard an experienced trader say (to retail traders): > The first thing you have to understand about this market is that this market was not created for you to make money. It was created for the big players to make money - from you!
Are e-mini markets manipulated?
CC BY-SA 2.5
null
2011-02-01T10:35:02.527
2015-06-20T21:58:58.593
null
null
164
[ "market-impact" ]
81
1
null
null
26
4176
Mean-reversion and trend-following strategies have some kind of a theory behind them that explains why they might work, if implemented well. Pattern-recognition, on the other hand, seems like nothing more than data mining and overfitting. Could patterns possibly have any predictive value? In other words, is there any theoretical reason why a pattern observed in historical data would be repeated in the future, other than random chance or self-fulfilling prophecy where the pattern "works" because enough traders use it?
Is there any theoretical basis for pattern-recognition strategies?
CC BY-SA 2.5
null
2011-02-01T10:58:19.637
2019-06-17T11:52:50.683
null
null
164
[ "theory" ]
82
1
130
null
27
4933
I heard about MetaTrader from [http://www.metaquotes.net](http://www.metaquotes.net). Are there any other frameworks or programs available? Do you use different software for backtesting and for running your trading algorithms? Thank you guys for your great answers. I will check out the posted applications.
What kind of basic framework or application do you use to run your trading algorithms?
CC BY-SA 2.5
null
2011-02-01T11:10:04.150
2011-02-09T03:59:00.753
2011-02-04T17:55:24.680
53
26
[ "data", "software", "programming", "algorithm" ]
83
2
null
81
8
null
General answer to a very general question: if you find a significant pattern that distinguishes structure from noise, you understand something about that system. You have a model of it, so you can extrapolate and forecast, and on that basis you can use the model to make money. In that sense, mean-reversion and trend-following are also "only" strategies that use a model derived from the data (or from wherever else). Take evolution as a proxy: living organisms also have a model of their environment (which is partly stochastic, too), and the successful organisms use it to survive and breed. As an aside: in terms of a philosophical basis, you can say that it is the belief that the past has meaning for the future (an inductive argument) - but this is only a belief which cannot be justified in itself (saying that it worked well in the past is itself inductive, so we have a catch-22 here).
null
CC BY-SA 2.5
null
2011-02-01T11:18:29.167
2011-02-01T11:18:29.167
null
null
12
null
84
1
110
null
28
29840
I know the derivation of the Black-Scholes differential equation and I understand (most of) the solution of the diffusion equation. What I am missing is the transformation from the Black-Scholes differential equation to the diffusion equation (with all the conditions) and back to the original problem. All the transformations I have seen so far are either unclear or technically demanding (at least by my standards). My question: could you provide me references for a very easily understood, step-by-step solution?
Transformation from the Black-Scholes differential equation to the diffusion equation - and back
CC BY-SA 2.5
null
2011-02-01T11:43:21.703
2019-07-29T13:12:38.367
2011-02-09T20:09:50.543
70
12
[ "black-scholes", "differential-equations" ]
85
1
88
null
48
5253
Back in the mid 90's I used the Black-Scholes model and the Cox-Ross-Rubinstein (binomial) model to price options. That was nearly 15 years ago, and I was wondering if there are any new models being used to price options.
Are there any new Option pricing models?
CC BY-SA 2.5
null
2011-02-01T12:04:09.257
2016-02-23T09:29:50.377
null
null
169
[ "black-scholes", "option-pricing" ]
86
2
null
14
5
null
I handle volatility curves where moneyness is quoted in delta by iterating on a guess (a small code sketch follows this record): - Use an initial guess for delta of 0.5 (call)/-0.5 (put). - Look up the volatility on the curve using the guessed delta. - Calculate the delta for the option using the vol found in step 2. - Repeat, Newton-Raphson style, until the difference in delta is small enough.
null
CC BY-SA 2.5
null
2011-02-01T12:22:47.013
2011-02-01T12:22:47.013
null
null
22
null
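A minimal sketch of the iteration described above (assumptions, not from the original answer: Black-Scholes spot delta for a call with no dividends, and a plain fixed-point update rather than a full Newton-Raphson step; `vol_by_delta` is a hypothetical callable standing in for the quoted curve):

```python
# A minimal sketch: find the vol for a given strike on a curve quoted in
# call delta, by iterating guess -> lookup -> recompute delta.
from math import log, sqrt
from scipy.stats import norm

def bs_call_delta(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm.cdf(d1)          # spot delta of a call, no dividends assumed

def vol_for_strike(vol_by_delta, S, K, r, T, tol=1e-8, max_iter=100):
    delta = 0.5                          # step 1: initial guess
    for _ in range(max_iter):
        sigma = vol_by_delta(delta)      # step 2: look up vol at guessed delta
        new_delta = bs_call_delta(S, K, r, sigma, T)   # step 3: implied delta
        if abs(new_delta - delta) < tol: # step 4: stop when the guess settles
            return sigma
        delta = new_delta
    raise RuntimeError("delta iteration did not converge")

# Hypothetical smile: slightly higher vol away from 50-delta
smile = lambda d: 0.20 + 0.10 * (0.5 - d)**2
print(vol_for_strike(smile, S=100.0, K=110.0, r=0.01, T=0.5))
```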
87
2
null
81
13
null
Weak form market efficiency says that you can't predict prices based on past prices - or that technical analysis doesn't work. I think the tests of weak form market efficiency are pretty conclusive and show that the US stock market is weak-form efficient, at least on a timeline longer than a few minutes. That's not to say that markets are "efficient". The tests of semi-strong form efficiency (i.e., you can't predict prices from all public info) are still debated a little, but I think most would say that markets are not semi-strong form efficient: you can do fundamental analysis and have a chance at determining winners and losers. And markets are definitely not strong form efficient (i.e., you can't predict prices from all info, public and private). So does technical analysis work? I don't think so. Some may be earning abnormal returns, but they're likely taking risks that aren't obvious from the charts. Or in the context of mean reversion: yes, most of the time things revert to the mean, but they may not revert to the mean within your tolerance for pain. I think the best light read on the mean reversion topic is "When Genius Failed". Their convergence trades on off-the-run and on-the-run Treasuries were "right", but the spreads moved further away from the mean before finally converging - after the fund's insolvency.
null
CC BY-SA 2.5
null
2011-02-01T12:30:31.527
2011-02-01T12:30:31.527
null
null
106
null
88
2
null
85
26
null
Black-Scholes itself didn't change a lot, but we can now adjust it to deal with many more complicated factors and price more complicated contracts: - stochastic volatility (Heston, Gatheral) - stochastic rates (Hull) - credit risk - dividends Other (computationally intensive) methods have also evolved to deal with various types of contracts where BS is not a very appropriate choice, e.g. Monte Carlo simulation for path-dependent options (a minimal sketch follows this record).
null
CC BY-SA 3.0
null
2011-02-01T12:36:43.923
2011-12-10T20:32:50.160
2011-12-10T20:32:50.160
35
15
null
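As an illustration of the Monte Carlo approach mentioned above, here is a minimal sketch (assumptions, not from the original answer: GBM under the risk-neutral measure, an arithmetic-average Asian call, made-up parameters):

```python
# A minimal sketch: Monte Carlo price of an arithmetic-average Asian call,
# a path-dependent payoff that plain Black-Scholes cannot handle directly.
import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps=252, n_paths=20_000, seed=42):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    paths = S0 * np.exp(log_paths)
    payoff = np.maximum(paths.mean(axis=1) - K, 0.0)  # average along each path
    return np.exp(-r * T) * payoff.mean()             # discounted expectation

print(asian_call_mc(100.0, 100.0, 0.05, 0.2, 1.0))    # roughly 5.7
```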
89
2
null
82
13
null
Some of us see this as a data-driven, empirical problem. And for Programming with Data, you could do a lot worse than picking [R](http://www.r-project.org), which was made for the task. The [CRAN Task View on Finance](http://cran.r-project.org/web/views/Finance.html) lists a number of relevant packages. For trading strategies in particular, the quantstrat and blotter packages, which are both still on [R-Forge in the TradeAnalytics bundle](http://r-forge.r-project.org/R/?group_id=316), are a very good start and are often discussed on the [R-SIG-Finance](https://stat.ethz.ch/pipermail/r-sig-finance/) mailing list.
null
CC BY-SA 2.5
null
2011-02-01T12:58:59.027
2011-02-01T15:25:52.530
2011-02-01T15:25:52.530
69
69
null
91
2
null
85
8
null
There are plenty of other models. You can also add all the exponential Lévy processes, with or without time change, and also other stochastic volatility models such as SABR. I must add that there exists a paradigm different from "risk neutral pricing" (mainly developed by Platen and Heath) called "benchmark pricing", which is in a way (that I do not fully understand yet) more general than the risk-neutral paradigm. The biggest problem is that calculating and determining the benchmark portfolio doesn't seem easy to achieve in this "supermartingale framework". Regards
null
CC BY-SA 2.5
null
2011-02-01T13:27:53.360
2011-02-02T11:22:54.870
2011-02-02T11:22:54.870
92
92
null
92
2
null
79
9
null
As far as I know, MCMC (and also PMCMC) can be useful for (Bayesian) estimation of the parameters of some hidden process, as in the Heston model case, based on observations of the stock (filtering). But the problem here is that those estimates do not match those obtained by calibrating to vanilla options under the risk-neutral measure. So as an econometric tool it has limited utility, in my opinion, for financial applications. As an example, let's say that thanks to MCMC methods you've got an estimate of the parameters of the Heston model on a given stock, based on observations of the stock values. Then you can (I won't blame you for that) hedge a call option on this stock using the Heston model with your estimates. Nevertheless, if there is a market for call options on this stock, you will observe that calibrating the Heston model to the vanilla prices gives you another set of parameters. So what should we do then? Please do not forget that when filtering you are under real-world probabilities, but when you are hedging and pricing you are under risk-neutral probabilities. I definitely won't follow (blindly) the filtering estimates, mainly for the following reason, which can be summed up in a rather provocative way: "the market is always right". I say this because if you are marking your position to market (as everyone does), then you must use the calibrated estimates to value your portfolio; those calibrated estimates can evolve in a way that goes against your filtering estimates, and there is nothing you can do about it. Finally, if your stop loss is hit (you had better have one), then even though you believe you will make money out of your filtering strategy by holding it to the end of the contract, you have to realize that it is not an arbitrage strategy: you entered a risky position, not because the filtering estimate was badly calculated, but because the market can evolve against the best past-history estimates. I hope I made my point clear. Nevertheless, as a tool to sample random variables that are otherwise difficult to simulate, MCMC can be used for pricing. I think I have seen a paper on arXiv using MCMC techniques to price American options.
null
CC BY-SA 3.0
null
2011-02-01T13:36:51.413
2011-09-09T13:13:58.610
2011-09-09T13:13:58.610
1355
92
null
93
2
null
36
5
null
The term 'rule of thumb' is ambiguous here, because I don't think there are any rules of thumb; you just need to do the number crunching. However, there are some characteristics linked to correlation that are stable through time. For instance, it is a common observation that the hierarchy of correlations across different markets is relatively stable: US equities are less correlated than European ones. Pre-crisis, it was around 40% for the US and 60% for Europe, I think. Obviously it shot up afterwards.
null
CC BY-SA 2.5
null
2011-02-01T14:19:58.550
2011-02-01T14:19:58.550
null
null
172
null
94
2
null
74
17
null
A few pointers: - When I looked into this a few years ago, a good solution at the time was LIM's XMIM, which also has an S-Plus/Matlab interface. Whit Armstrong also provided an R package for this, although I don't know how complete it is. This provides both the data and the software for analysis. - On the very high end (and expensive) side of the spectrum, OneTick and KDB are both being used for this purpose by professional money managers. - Two tools used by the non-professional community are available from brokers: https://www.thinkorswim.com/ or http://www.optionvue.com/.
null
CC BY-SA 2.5
null
2011-02-01T14:34:38.097
2011-02-01T14:46:42.340
2011-02-01T14:46:42.340
17
17
null
95
2
null
27
11
null
It can be shown, using a combination of calendars and butterflies, that one can lock in now the future variance conditional on the spot being around some specific level (local vol). So if you bought it and variance gets realized higher while the spot is there, you make money; if the spot is not there, you are neutral. Another way to look at that dependency between spot level and vol level is a regular delta-hedge strategy, whose P&L is path dependent on where the spot is (w.r.t. the option strike) when volatility gets realized: your gamma is located around the strike, so if vol is high around there you pocket a lot (or lose if it stalls), whereas if the spot moves away from the strike, your lack of gamma makes you insensitive to it (see the short P&L identity after this record). This dependency, combined with the market sentiment that volatility is higher when the spot goes down, leads to higher vol prices for options with lower strikes.
null
CC BY-SA 4.0
null
2011-02-01T14:44:06.687
2019-08-27T13:22:09.873
2019-08-27T13:22:09.873
172
172
null
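The path dependence described above is often summarized by a standard identity (a textbook result, not part of the original answer): the instantaneous P&L of a delta-hedged option position is approximately

$$d\mathrm{P\&L}_t \approx \tfrac{1}{2}\,\Gamma_t\,S_t^2\left(\sigma_{\mathrm{realized}}^2-\sigma_{\mathrm{implied}}^2\right)dt,$$

so the total P&L $\int_0^T \tfrac{1}{2}\Gamma_t S_t^2\left(\sigma_{\mathrm{realized}}^2-\sigma_{\mathrm{implied}}^2\right)dt$ is a gamma-weighted average of the variance spread: since $\Gamma_t$ is concentrated near the strike, what matters is the volatility realized while the spot happens to be near the strike.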
96
1
2045
null
24
12729
What is a coherent risk measure, and why do we care? Can you give a simple example of a coherent risk measure as opposed to a non-coherent one, and the problems that a coherent measure addresses in portfolio choice?
What is a "coherent" risk measure?
CC BY-SA 2.5
null
2011-02-01T14:45:48.957
2019-07-13T21:20:12.793
null
null
114
[ "risk", "modern-portfolio-theory", "coherent-risk-measure" ]
97
2
null
82
5
null
We use MetaTrader intensively, but it has a lot of problems... we waste a lot of time on workarounds and dirty tricks to make things work. Now we're moving some developments to Matlab because it is stable and great for quick prototyping; you can also use free software like Octave, R, Maxima, and Sagemath (which I think encompasses all of these other applications).
null
CC BY-SA 2.5
null
2011-02-01T14:51:00.853
2011-02-01T14:51:00.853
null
null
82
null
98
2
null
37
12
null
You can download data for 32 forex pairs from Dukascopy's JForex platform, tick by tick since 2003. I think it is very accurate relative to its price (free). You can download their different formats by starting [here](http://www.dukascopy.com/swiss/english/data_feed/historical/) (no registration required).
null
CC BY-SA 2.5
null
2011-02-01T14:54:38.387
2011-02-08T22:51:37.917
2011-02-08T22:51:37.917
279
82
null
99
2
null
66
1
null
"Quantitative Trading", Ernie Chan's book is a good starting point to learn pairs trading.
null
CC BY-SA 2.5
null
2011-02-01T14:59:18.243
2011-02-01T14:59:18.243
null
null
82
null
101
2
null
82
9
null
I develop strategies for a lot of these different platforms, and the one that I feel offers the most is [NinjaTrader](http://www.ninjatrader.com). It uses C#, which is a bit slower than MetaTrader (which, if I remember correctly, uses a variant of C++); in fact, with MT5 there should be almost no difference. However, it makes up for the slowness in spades with the freedom it allows you. Not only that, but I've had the least problems with it; contrast that with MetaTrader, which I absolutely dread coding in because everything feels like a hack and the historical backtester is just terrible. And if memory serves, they don't have anywhere near the charting capabilities, and they only offer a few periods with which to view the data. One thing to note: NinjaTrader isn't free like MetaTrader (you can't trade live without paying for it, but everything else is free). It's \$1000/lifetime or \$180/quarter, and they don't offer refunds, so be sure you find a brokerage with a free trial before you go paying for it. If you're looking for a free alternative, I have played a little with [OpenECry](http://www.openecry.com), and they seem to do a lot of things right that NinjaTrader does wrong, but it doesn't offer anywhere near the freedom that Ninja does either. So it's really a trade-off, and I feel Ninja wins.
null
CC BY-SA 2.5
null
2011-02-01T15:14:32.753
2011-02-01T15:14:32.753
null
null
173
null
103
1
null
null
49
49367
I've struggled for a long time to understand this: what is it, and how does it affect you? Yes, I mean risk-neutral pricing; the Wilmott Forums were not clear about it.
How does the "risk-neutral pricing framework" work?
CC BY-SA 3.0
null
2011-02-01T15:29:19.683
2022-11-23T21:42:39.093
2013-03-06T12:01:33.100
35
103
[ "risk", "risk-management" ]
104
1
null
null
3
889
Any recommendations on the best schools and overall education choices for quantitative finance?
What are the best master programmes for someone interested in a career in quantitative finance?
CC BY-SA 2.5
null
2011-02-01T15:59:30.160
2011-02-02T05:56:49.757
null
null
147
[ "finance", "education", "career" ]
105
2
null
104
5
null
Quantnet provides a [ranking of MS in Financial Engineering programs](http://www.quantnet.com/mfe-programs-rankings/), which can be used for reference. More generally, this really depends on what area of quantitative finance interests you: if you want to work on developing valuation and pricing models, then one of these programs will be very useful. If you want to work in quantitative trading, it's slightly less clear. Quants in other areas can have varied educational backgrounds, ranging from a Ph.D. in Physics to degrees in Computer Science or Engineering. High frequency trading, for instance, is very technical and will often include people with more of a computer science background. In my experience, statistics and machine learning are also very useful specialties.
null
CC BY-SA 2.5
null
2011-02-01T16:06:23.473
2011-02-01T16:23:43.173
2011-02-01T16:23:43.173
43
17
null
106
2
null
104
3
null
It's often hard to give a definite answer to such "what are the top programs" questions. When we did the Quantnet MFE ranking in 2009, it was meant more or less as a guide for people new to this field, since it's hard to find information on these MFE programs. A lot of your choice will come down to personal preferences such as location, tuition, length, and program strength. Many people have used our ranking to do further research on programs they may never have heard of before. We are working on the 2011 ranking, which will be more comprehensive.
null
CC BY-SA 2.5
null
2011-02-01T16:22:05.473
2011-02-01T16:22:05.473
null
null
43
null
107
2
null
103
39
null
I assume you mean risk-neutral pricing? Think of it this way (beware, oversimplification ahead ;-) You want to price a derivative on gold, a gold certificate. The product just pays out the current price of an ounce in $. Now, how would you price it? Would you think about your risk preferences? No, you wouldn't; you would just take the current gold price and perhaps add some spread. Therefore the risk preferences did not matter (= risk neutrality), because this product is derived (= derivative) from an underlying product (= underlying). This is because all the different risk preferences of the market participants are already included in the price of the underlying, and the derivative can be hedged with the underlying continuously (at least this is what is often taken for granted). As soon as the price of the gold certificate diverges from the original price, a shrewd trader would just buy/sell the underlying and sell/buy the certificate to pocket a risk-free profit - and the price will soon come back again (a small simulation sketch follows this record)... So, you see, the basic concept of risk neutrality is quite natural and easy to grasp. Of course, the devil is in the details... but that is another story. See also my answer to a similar question here: [Why Drifts are not in the Black Scholes Formula](https://quant.stackexchange.com/questions/8247/why-drifts-are-not-in-the-black-scholes-formula/8252#8252)
null
CC BY-SA 3.0
null
2011-02-01T16:37:43.960
2018-04-07T09:44:45.210
2018-04-07T09:44:45.210
12
12
null
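To illustrate the point numerically, here is a minimal sketch (not from the original answer; GBM dynamics, a constant rate, and no storage costs or dividends are all assumptions): discounting the risk-neutral expectation of the certificate's payoff $S_T$ returns today's gold price, with no reference to anyone's risk preferences or to a real-world drift.

```python
# A minimal sketch: price a certificate paying S_T by Monte Carlo under the
# risk-neutral measure. Note the drift is r, not any real-world drift mu.
import numpy as np

def certificate_price_mc(S0, r, sigma, T, n_paths=1_000_000, seed=1):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * ST.mean()   # discounted risk-neutral expectation

# Whatever r and sigma we assume, the price collapses to today's spot:
print(certificate_price_mc(S0=1800.0, r=0.03, sigma=0.15, T=1.0))  # ~1800
```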
108
2
null
96
3
null
Coherent risk measures were created to address a problem that extant risk measures, like VaR, did not: namely, that a risk measure should reward diversification (the defining axioms are sketched after this record).
null
CC BY-SA 2.5
null
2011-02-01T16:57:38.543
2011-02-01T16:57:38.543
null
null
108
null
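For concreteness (standard textbook definitions in the Artzner et al. convention, not part of the original answer): a risk measure $\rho$ is coherent if, for all portfolio payoffs $X, Y$, it satisfies

$$\begin{aligned}&\text{Monotonicity:} && X \le Y \;\Rightarrow\; \rho(X) \ge \rho(Y),\\&\text{Subadditivity:} && \rho(X+Y) \le \rho(X)+\rho(Y),\\&\text{Positive homogeneity:} && \rho(\lambda X) = \lambda\,\rho(X) \text{ for } \lambda \ge 0,\\&\text{Translation invariance:} && \rho(X+c) = \rho(X)-c \text{ for cash amounts } c.\end{aligned}$$

Subadditivity is the diversification axiom that VaR can violate; Expected Shortfall satisfies all four.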
109
2
null
37
20
null
[DTN's IQFeed](http://www.iqfeed.net/index.cfm?displayaction=data&section=services) is really good, if a little expensive. I believe it starts at 80 dollars/month and then you add your exchange fees on top. To get access to the developer API you need to pay 300 dollars for a year's worth of access. Details: - Real-Time, TRUE Tick-by-Tick Data on US and Canadian Equities (NYSE, NASDAQ, AMEX, Canadian Stock Exchanges) - Delayed Futures Data (Real-Time Data Available for an additional fee) - Real-Time Equity/Index Options and Forex Data Available for an additional fee - Real Time Index quotes
null
CC BY-SA 2.5
null
2011-02-01T17:04:10.337
2011-02-01T19:59:24.283
2011-02-01T19:59:24.283
8
173
null
110
2
null
84
37
null
One starts with the Black-Scholes equation $$\frac{\partial C}{\partial t}+\frac{1}{2}\sigma^2S^2\frac{\partial^2 C}{\partial S^2}+ rS\frac{\partial C}{\partial S}-rC=0,\qquad\qquad\qquad\qquad\qquad(1)$$ supplemented with the terminal and boundary conditions (in the case of a European call) $$C(S,T)=\max(S-K,0),\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(2)$$ $$C(0,t)=0,\qquad C(S,t)\sim S\ \mbox{ as } S\to\infty.\qquad\qquad\qquad\qquad\qquad\qquad$$ The option value $C(S,t)$ is defined over the domain $0< S < \infty$, $0\leq t\leq T$. Step 1. The equation can be rewritten in the equivalent form $$\frac{\partial C}{\partial t}+\frac{1}{2}\sigma^2\left(S\frac{\partial }{\partial S}\right)^2C+\left(r-\frac{1}{2}\sigma^2\right)S\frac{\partial C}{\partial S}-rC=0.$$ The change of independent variables $$S=e^y,\qquad t=T-\tau$$ results in $$S\frac{\partial }{\partial S}\to\frac{\partial}{\partial y},\qquad \frac{\partial}{\partial t}\to - \frac{\partial}{\partial \tau},$$ so one gets the constant coefficient equation $$\frac{\partial C}{\partial \tau}-\frac{1}{2}\sigma^2\frac{\partial^2 C}{\partial y^2}-\left(r-\frac{1}{2}\sigma^2\right)\frac{\partial C}{\partial y}+rC=0.\qquad\qquad\qquad(3)$$ Step 2. If we replace $C(y,\tau)$ in equation (3) with $u=e^{r\tau}C$, we will obtain that $$\frac{\partial u}{\partial \tau}-\frac{1}{2}\sigma^2\frac{\partial^2 u}{\partial y^2}-\left(r-\frac{1}{2}\sigma^2\right)\frac{\partial u}{\partial y}=0.$$ Step 3. Finally, the substitution $x=y+(r-\sigma^2/2)\tau$ allows us to eliminate the first order term and to reduce the preceding equation to the form $$\frac{\partial u}{\partial \tau}=\frac{1}{2}\sigma^2\frac{\partial^2 u}{\partial x^2}$$ which is the standard [heat equation](http://en.wikipedia.org/wiki/Heat_equation). The function $u(x,\tau)$ is defined for $-\infty < x < \infty$, $0\leq\tau\leq T$. The terminal condition (2) turns into the initial condition $$u(x,0)=u_0(x)=\max(e^{x}-K,0)$$ (indeed, at $\tau=0$ we have $x=y$, $u=C$, and $S=e^x$). The solution of the heat equation is given by the well-known [formula](http://en.wikipedia.org/wiki/Heat_equation#Homogeneous_heat_equation) $$u(x,\tau)=\frac{1}{\sigma\sqrt{2\pi \tau}}\int_{-\infty}^{\infty} u_0(s)\exp\left(-\frac{(x-s)^2}{2\sigma^2 \tau}\right)ds.$$ Now, if we evaluate the integral with our specific function $u_0$ and return to the old variables $(x,\tau,u)\to(S,t,C)$, we will arrive at the usual Black–Scholes–Merton formula for the value of a European call (a numerical check of this recipe is sketched after this record). The details of the calculation can be found e.g. in [The Mathematics of Financial Derivatives](http://books.google.com/books?id=VYVhnC3fIVEC&printsec=frontcover&dq=the+mathematics+of+financial+derivatives&source=bl&ots=-Mbgb0U8Ac&sig=deXrx-p9jKB-FoqZhuZNmWZlZQU&hl=en&ei=_UtITcyiN4SYOs3j5acE&sa=X&oi=book_result&ct=result&resnum=5&ved=0CD0Q6AEwBA#v=o) by Wilmott, Howison, and Dewynne (see Section 5.4), where a slightly different (but equivalent) change of variables is used.
null
CC BY-SA 3.0
null
2011-02-01T18:13:27.523
2014-01-27T20:19:10.943
2014-01-27T20:19:10.943
70
70
null
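As promised above, a minimal numerical check (not part of the original answer; all parameter values are made up): it evaluates the heat-kernel convolution on a grid, maps $u$ back to $C(S,t)$, and compares with the closed-form price.

```python
# A minimal sketch: verify the heat-equation recipe against the closed-form
# Black-Scholes call price.
import numpy as np
from scipy.stats import norm

S, K, r, sigma, T, t = 100.0, 110.0, 0.05, 0.2, 1.0, 0.0
tau = T - t

# Closed-form Black-Scholes call for comparison
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
d2 = d1 - sigma * np.sqrt(tau)
bs = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

# Transformed variables: y = ln S, x = y + (r - sigma^2/2) * tau
x = np.log(S) + (r - 0.5 * sigma**2) * tau
width = 10 * sigma * np.sqrt(tau)                  # +-10 std devs is plenty
s_grid = np.linspace(x - width, x + width, 20001)
u0 = np.maximum(np.exp(s_grid) - K, 0.0)           # initial condition u(x, 0)
kernel = (np.exp(-(x - s_grid)**2 / (2 * sigma**2 * tau))
          / (sigma * np.sqrt(2 * np.pi * tau)))
u = np.sum(u0 * kernel) * (s_grid[1] - s_grid[0])  # heat-kernel convolution
C = np.exp(-r * tau) * u                           # undo u = e^{r tau} C

print(bs, C)  # the two numbers should agree to several decimal places
```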
